https://scholars.lib.ntu.edu.tw/handle/123456789/559175
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, T.-S. | en_US |
dc.contributor.author | Lee, M.-Y. | en_US |
dc.contributor.author | Liu, C.-T. | en_US |
dc.contributor.author | Chien, Shao-Yi | - |
dc.date.accessioned | 2021-05-05T03:21:16Z | - |
dc.date.available | 2021-05-05T03:21:16Z | - |
dc.date.issued | 2020 | - |
dc.identifier.issn | 2160-7508 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.url?eid=2-s2.0-85090154925&partnerID=40&md5=9ee861c67fd545d12c876c02c415053e | - |
dc.identifier.uri | https://scholars.lib.ntu.edu.tw/handle/123456789/559175 | - |
dc.description.abstract | Vehicle re-identification (re-ID) matches images of the same vehicle across different cameras. It is fundamentally challenging because the dramatically different appearances caused by different viewpoints can make the framework fail to match two vehicles of the same identity. Most existing works solve the problem by extracting viewpoint-aware features via a spatial attention mechanism, which, however, usually suffers from noisy generated attention maps or otherwise requires expensive keypoint labels to improve the quality. In this work, we propose the Viewpoint-aware Channel-wise Attention Mechanism (VCAM) by observing the attention mechanism from a different aspect. Our VCAM enables the feature learning framework to channel-wisely reweigh the importance of each feature map according to the "viewpoint" of the input vehicle. Extensive experiments validate the effectiveness of the proposed method and show that we perform favorably against state-of-the-art methods on the public VeRi-776 dataset and obtain promising results on the 2020 AI City Challenge. We also conduct further experiments to demonstrate the interpretability of how our VCAM practically assists the learning framework. © 2020 IEEE. | - |
dc.relation.ispartof | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops | - |
dc.subject.other | Arts computing; Computer vision; Attention mechanisms; Feature learning; Feature map; Interpretability; Learning frameworks; Re identifications; Spatial attention; State of the art; Vehicles | - |
dc.title | Viewpoint-aware channel-wise attentive network for vehicle re-identification | en_US |
dc.type | conference paper | en |
dc.identifier.doi | 10.1109/CVPRW50498.2020.00295 | - |
dc.identifier.scopus | 2-s2.0-85090154925 | - |
dc.relation.pages | 2448-2455 | - |
dc.relation.journalvolume | 2020-June | - |
item.cerifentitytype | Publications | - |
item.fulltext | no fulltext | - |
item.openairecristype | http://purl.org/coar/resource_type/c_5794 | - |
item.openairetype | conference paper | - |
item.grantfulltext | none | - |
crisitem.author.dept | Electronics Engineering | - |
crisitem.author.dept | Electrical Engineering | - |
crisitem.author.dept | Intel-NTU Connected Context Computing Center | - |
crisitem.author.dept | Networking and Multimedia | - |
crisitem.author.dept | MediaTek-NTU Research Center | - |
crisitem.author.orcid | 0000-0002-0634-6294 | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | Others: University-Level Research Centers | - |
crisitem.author.parentorg | Others: International Research Centers | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | Others: University-Level Research Centers | - |
Appears in Collections: | Department of Electrical Engineering |
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated in the item's rights statement.
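The abstract describes channel-wise reweighing of feature maps conditioned on the input vehicle's viewpoint. The sketch below is a minimal, dependency-free illustration of that general idea (a sigmoid-gated channel attention driven by a viewpoint vector), not the paper's actual VCAM implementation; the function name, the plain-list tensor representation, and the single linear gating layer are all assumptions made for clarity.

```python
import math

def viewpoint_channel_attention(feature_maps, viewpoint, weights, bias):
    """Illustrative sketch of viewpoint-conditioned channel attention.

    feature_maps: list of C channels, each a 2-D list (H x W) of floats.
    viewpoint:    list of floats encoding the input vehicle's viewpoint.
    weights:      C x len(viewpoint) gating weights (hypothetical, learned).
    bias:         length-C gating biases (hypothetical, learned).
    """
    # One sigmoid gate per channel, predicted from the viewpoint vector:
    # gate[c] = sigmoid(weights[c] . viewpoint + bias[c])
    gates = []
    for c in range(len(feature_maps)):
        z = bias[c] + sum(w * v for w, v in zip(weights[c], viewpoint))
        gates.append(1.0 / (1.0 + math.exp(-z)))
    # Scale every spatial position of channel c by its gate,
    # i.e. reweigh the importance of each feature map channel-wisely.
    return [[[g * x for x in row] for row in fmap]
            for g, fmap in zip(gates, feature_maps)]
```

With zero weights and biases every gate is sigmoid(0) = 0.5, so each channel is halved; nonzero weights let different viewpoints amplify or suppress different channels, which is the behaviour the abstract attributes to VCAM.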