Authors: Lee, T.; Lin, Y.-L.; Chiang, H.; Chiu, M.-W.; Hsu, W.; Huang, Polly
Date issued: 2018
Date accessioned: 2019-07-10
ISBN: 9781538684252
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/413042
Title: Cross-domain image-based 3D shape retrieval by view sequence learning
Type: conference paper
DOI: 10.1109/3DV.2018.00038
Scopus ID: 2-s2.0-85056780269
Keywords: Image-based 3D shape retrieval; Triplet loss; View sequence learning
Abstract: We propose a cross-domain image-based 3D shape retrieval method, which learns a joint embedding space for natural images and 3D shapes in an end-to-end manner. The similarities between images and 3D shapes can then be computed as distances in this embedding space. To better encode a 3D shape, we propose a new feature aggregation method, Cross-View Convolution (CVC), which models a 3D shape as a sequence of rendered views. To bridge the gap between images and 3D shapes, we propose a Cross-Domain Triplet Neural Network (CDTNN) that incorporates an adaptation layer to better match features from the two domains and can be trained end-to-end. In addition, we speed up triplet training by presenting a new, fast cross-domain triplet neural network architecture. We evaluate our method on a new image-to-3D-shape dataset for category-level retrieval and on ObjectNet3D for instance-level retrieval. Experimental results demonstrate that our method outperforms state-of-the-art approaches in terms of retrieval performance. We also provide an in-depth analysis of various design choices to further reduce memory storage and computational cost. © 2018 IEEE.
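The abstract describes retrieval via distances in a joint embedding space, trained with a triplet loss and an adaptation layer that maps image features toward the shape-embedding space. The paper itself does not give these formulas here, so the following is only a minimal sketch of a standard triplet margin loss with a linear adaptation step; the function names, the Euclidean distance, and the parameters `W`, `b`, and `margin` are illustrative assumptions, not the authors' exact formulation.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors (assumed metric)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def adapt(x, W, b):
    """Hypothetical linear adaptation layer: maps an image feature x
    into the shape-embedding space via y = W x + b."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

def triplet_loss(anchor_img, pos_shape, neg_shape, W, b, margin=0.2):
    """Triplet margin loss: pull the adapted image embedding toward the
    matching shape embedding, push it from a non-matching one by `margin`."""
    a = adapt(anchor_img, W, b)
    d_pos = euclidean(a, pos_shape)
    d_neg = euclidean(a, neg_shape)
    return max(0.0, d_pos - d_neg + margin)

# Toy usage: identity adaptation, 2-D embeddings.
W = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
easy = triplet_loss([0.0, 0.0], [0.0, 0.0], [1.0, 0.0], W, b)  # negative is far: loss 0
hard = triplet_loss([0.0, 0.0], [1.0, 0.0], [1.0, 0.0], W, b)  # d_pos == d_neg: loss = margin
```

In training, the loss would be summed over mined triplets and backpropagated through both the adaptation layer and the view-sequence encoder; this sketch only shows the per-triplet objective.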