Title: Cross-domain image-based 3D shape retrieval by view sequence learning
Authors: Lee T.
Keywords: Image-based 3D shape retrieval; Triplet loss; View sequence learning
Issue Date: 2018
Pages: 258-266
Source: 2018 International Conference on 3D Vision, 3DV 2018
Abstract:
We propose a cross-domain image-based 3D shape retrieval method, which learns a joint embedding space for natural images and 3D shapes in an end-to-end manner. The similarities between images and 3D shapes can then be computed as distances in this embedding space. To better encode a 3D shape, we propose a new feature aggregation method, Cross-View Convolution (CVC), which models a 3D shape as a sequence of rendered views. To bridge the gap between images and 3D shapes, we propose a Cross-Domain Triplet Neural Network (CDTNN) that incorporates an adaptation layer to better match features from the two domains and can be trained end-to-end. In addition, we speed up triplet training by presenting a new fast cross-domain triplet neural network architecture. We evaluate our method on a new image-to-3D-shape dataset for category-level retrieval and on ObjectNet3D for instance-level retrieval. Experimental results demonstrate that our method outperforms state-of-the-art approaches in retrieval performance. We also provide an in-depth analysis of various design choices to further reduce memory storage and computational cost. © 2018 IEEE.
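The cross-domain triplet objective described in the abstract can be illustrated as follows. This is a minimal NumPy sketch under stated assumptions: the linear `adapt` layer, the Euclidean distance, and the margin value are illustrative placeholders, not the paper's exact CDTNN architecture or hyperparameters.

```python
import numpy as np

def adapt(shape_feat, W, b):
    """Hypothetical linear adaptation layer that maps a 3D-shape feature
    into the image embedding space (the paper's layer may differ)."""
    return shape_feat @ W + b

def cross_domain_triplet_loss(img_anchor, shape_pos, shape_neg, W, b, margin=0.2):
    """Hinge-style triplet loss: pull the matching shape toward the image
    anchor in the joint embedding space, push the non-matching shape away."""
    pos = adapt(shape_pos, W, b)
    neg = adapt(shape_neg, W, b)
    d_pos = np.linalg.norm(img_anchor - pos)  # anchor-to-positive distance
    d_neg = np.linalg.norm(img_anchor - neg)  # anchor-to-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Toy usage: identity adaptation, positive shape already near the anchor.
W, b = np.eye(4), np.zeros(4)
anchor = np.zeros(4)
loss_easy = cross_domain_triplet_loss(anchor, np.zeros(4), np.ones(4), W, b)
loss_hard = cross_domain_triplet_loss(anchor, np.ones(4), np.zeros(4), W, b)
```

In training, `W` and `b` (and the image/shape encoders producing the features) would be learned jointly so that image-shape distances become meaningful for retrieval.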
Appears in Collections: Department of Computer Science and Information Engineering