DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Tsai Shien | en_US |
dc.contributor.author | Hung, Wei Chih | en_US |
dc.contributor.author | Tseng, Hung Yu | en_US |
dc.contributor.author | SHAO-YI CHIEN | en_US |
dc.contributor.author | Yang, Ming Hsuan | en_US |
dc.date.accessioned | 2023-07-17T05:59:42Z | - |
dc.date.available | 2023-07-17T05:59:42Z | - |
dc.date.issued | 2022-01-01 | - |
dc.identifier.uri | https://scholars.lib.ntu.edu.tw/handle/123456789/633710 | - |
dc.description.abstract | Self-supervised learning has recently shown great potential in vision tasks through contrastive learning, which aims to discriminate each image, or instance, in the dataset. However, such instance-level learning ignores the semantic relationship among instances and sometimes undesirably repels the anchor from semantically similar samples, termed "false negatives". In this work, we show that the unfavorable effect of false negatives is more significant for large-scale datasets with more semantic concepts. To address the issue, we propose a novel self-supervised contrastive learning framework that incrementally detects and explicitly removes false negative samples. Specifically, following the training process, our method dynamically detects an increasing number of high-quality false negatives, considering that the encoder gradually improves and the embedding space becomes more semantically structured. Next, we discuss two strategies to explicitly remove the detected false negatives during contrastive learning. Extensive experiments show that our framework outperforms other self-supervised contrastive learning methods on multiple benchmarks in a limited-resource setup. The source code is available at https://github.com/tsaishien-chen/IFND. | en_US |
dc.relation.ispartof | ICLR 2022 - 10th International Conference on Learning Representations | en_US |
dc.title | Incremental False Negative Detection for Contrastive Learning | en_US |
dc.type | conference paper | en_US |
dc.identifier.scopus | 2-s2.0-85141584997 | - |
dc.identifier.url | https://api.elsevier.com/content/abstract/scopus_id/85141584997 | - |
item.fulltext | no fulltext | - |
item.openairecristype | http://purl.org/coar/resource_type/c_5794 | - |
item.cerifentitytype | Publications | - |
item.openairetype | conference paper | - |
item.grantfulltext | none | - |
crisitem.author.dept | Electronics Engineering | - |
crisitem.author.dept | Electrical Engineering | - |
crisitem.author.dept | Intel-NTU Connected Context Computing Center | - |
crisitem.author.dept | Networking and Multimedia | - |
crisitem.author.dept | MediaTek-NTU Research Center | - |
crisitem.author.orcid | 0000-0002-0634-6294 | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | Others: University-Level Research Centers | - |
crisitem.author.parentorg | Others: International Research Centers | - |
crisitem.author.parentorg | College of Electrical Engineering and Computer Science | - |
crisitem.author.parentorg | Others: University-Level Research Centers | - |
Appears in Collections: | Department of Electrical Engineering |
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
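The abstract describes removing detected false negatives during contrastive learning. As a rough illustration of the "elimination" strategy it mentions (masking detected false negatives out of the InfoNCE denominator), here is a minimal NumPy sketch for a single anchor. This is an assumption-laden illustration, not the released IFND code; the function name, arguments, and the simple boolean mask are all hypothetical.

```python
import numpy as np

def contrastive_loss_fn_elimination(sim, positive_idx, false_neg_mask, temperature=0.1):
    """InfoNCE-style loss for one anchor, with detected false negatives eliminated.

    sim            : (N,) similarities between the anchor and N candidates.
    positive_idx   : index of the true positive (the anchor's other view).
    false_neg_mask : (N,) bool array, True where a candidate was detected as a
                     false negative (semantically similar to the anchor).
    """
    logits = sim / temperature
    # "Elimination": drop detected false negatives from the softmax denominator
    # so the anchor is no longer repelled from semantically similar samples.
    keep = ~false_neg_mask
    keep[positive_idx] = True                      # always keep the positive
    exp = np.exp(logits - logits.max())            # numerically stable softmax
    denom = exp[keep].sum()
    return -np.log(exp[positive_idx] / denom)

# Toy usage: eliminating a high-similarity false negative lowers the loss,
# since that sample no longer competes with the positive in the denominator.
sim = np.array([0.9, 0.8, 0.1, 0.0])
mask = np.array([False, True, False, False])       # candidate 1 flagged as a false negative
loss_elim = contrastive_loss_fn_elimination(sim, 0, mask)
loss_base = contrastive_loss_fn_elimination(sim, 0, np.zeros(4, dtype=bool))
```

The abstract's second strategy (not sketched here) would instead treat detected false negatives as additional positives rather than simply discarding them.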