Title: Orientation-Aware Vehicle Re-Identification with Semantics-Guided Part Attention Network
Authors: Chen, T.-S.; Liu, C.-T.; Wu, C.-W.; Chien, Shao-Yi
Type: conference paper
Year: 2020
Date added: 2021-09-02
ISSN: 0302-9743
DOI: 10.1007/978-3-030-58536-5_20
Scopus EID: 2-s2.0-85097249109
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85097249109&doi=10.1007%2f978-3-030-58536-5_20&partnerID=40&md5=08d42a9c48dd0666cc91773788dafb51
Repository URL: https://scholars.lib.ntu.edu.tw/handle/123456789/581094

Abstract: Vehicle re-identification (re-ID) focuses on matching images of the same vehicle across different cameras. It is fundamentally challenging because the differences between vehicles are sometimes subtle. While several studies incorporate spatial-attention mechanisms to aid vehicle re-ID, they often require expensive keypoint labels or suffer from noisy attention masks when such labels are unavailable. In this work, we propose a dedicated Semantics-guided Part Attention Network (SPAN) that robustly predicts part attention masks for different views of vehicles given only image-level semantic labels during training. With the help of the part attention masks, we can extract discriminative features from each part separately. We then introduce the Co-occurrence Part-attentive Distance Metric (CPDM), which places greater emphasis on vehicle parts that co-occur in both images when evaluating the feature distance between two images. Extensive experiments validate the effectiveness of the proposed method and show that our framework outperforms state-of-the-art approaches. © 2020, Springer Nature Switzerland AG.

Keywords: Computer vision; Semantic Web; Semantics; Co-occurrence; Discriminative features; Distance metrics; Feature distance; Re-identification; Semantic labels; Spatial attention; State-of-the-art approaches; Vehicles
SDG: SDG10
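
Note: The abstract outlines two computational steps, pooling part features with attention masks and weighting part distances by co-occurring visibility. The Python sketch below is an illustrative rendering of those two ideas only, not the authors' implementation; all function names, the visibility-based weighting, and the toy shapes are assumptions.

import numpy as np

def masked_part_features(feature_map, part_masks):
    """Pool a (C, H, W) feature map into one vector per part.

    part_masks: (P, H, W) soft attention masks in [0, 1], as would be
    predicted by a network such as SPAN (assumed format).
    Returns (P, C) part features and (P,) visibility scores (mask areas).
    """
    areas = part_masks.sum(axis=(1, 2)) + 1e-6                 # (P,)
    feats = np.einsum('chw,phw->pc', feature_map, part_masks) / areas[:, None]
    visibility = areas / (part_masks.shape[1] * part_masks.shape[2])
    return feats, visibility

def co_occurrence_distance(feats_a, vis_a, feats_b, vis_b):
    """Distance between two images that emphasizes parts visible in both.

    Each per-part Euclidean distance is weighted by how strongly that part
    co-occurs in the two images (here: the product of visibility scores,
    renormalized to sum to one) -- one plausible reading of CPDM.
    """
    part_dists = np.linalg.norm(feats_a - feats_b, axis=1)     # (P,)
    weights = vis_a * vis_b
    weights = weights / (weights.sum() + 1e-6)
    return float((weights * part_dists).sum())

# Toy usage: 3 parts, 256-dim features on an 8x8 spatial grid.
rng = np.random.default_rng(0)
fmap_a, fmap_b = rng.normal(size=(2, 256, 8, 8))
masks_a, masks_b = rng.uniform(size=(2, 3, 8, 8))
fa, va = masked_part_features(fmap_a, masks_a)
fb, vb = masked_part_features(fmap_b, masks_b)
print(co_occurrence_distance(fa, va, fb, vb))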