https://scholars.lib.ntu.edu.tw/handle/123456789/636985
Title: | Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS | Authors: | Chiang, Cheng-Han; Chuang, Yung-Sung; Glass, James; Lee, Hung-Yi |
Date: | 1-Jan-2023 | Source publication: | Proceedings of the Annual Meeting of the Association for Computational Linguistics | Abstract: | Existing sentence textual similarity benchmark datasets use only a single number to summarize how similar a sentence encoder's decisions are to humans'. However, it is unclear what kind of sentence pairs a sentence encoder (SE) considers similar. Moreover, existing SE benchmarks mainly contain sentence pairs with low lexical overlap, so it is unclear how SEs behave when two sentences have high lexical overlap. We introduce a high-quality SE diagnostic dataset, HEROS. HEROS is constructed by transforming an original sentence into a new sentence based on certain rules to form a minimal pair, and the minimal pair has high lexical overlap. The rules include replacing a word with a synonym, an antonym, a typo, or a random word, and converting the original sentence into its negation. Different rules yield different subsets of HEROS. By systematically comparing the performance of over 60 supervised and unsupervised SEs on HEROS, we reveal that most unsupervised sentence encoders are insensitive to negation. We find that the datasets used to train an SE are the main determinant of what kind of sentence pairs it considers similar. We also show that even if two SEs have similar performance on STS benchmarks, they can behave very differently on HEROS. Our results reveal the blind spot of traditional STS benchmarks when evaluating SEs. |
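The abstract describes constructing minimal pairs by applying one rule (synonym, antonym, typo, random word, or negation) to an original sentence while keeping high lexical overlap. A minimal sketch of that idea follows; the word lists, the `transform` function, and the naive negation heuristic are illustrative assumptions, not the paper's actual construction pipeline.

```python
# Illustrative sketch of HEROS-style minimal-pair construction: apply one
# rule to one word so the new sentence shares almost all tokens with the
# original. The lexicons below are toy placeholders, not the paper's data.
SYNONYMS = {"happy": "glad"}
ANTONYMS = {"happy": "sad"}

def make_typo(word: str) -> str:
    """Swap two adjacent characters to simulate a typo."""
    if len(word) < 2:
        return word
    i = len(word) // 2
    return word[:i - 1] + word[i] + word[i - 1] + word[i + 1:]

def transform(sentence: str, rule: str) -> str:
    """Apply one rule to the first eligible word, forming a minimal pair."""
    words = sentence.split()
    for idx, w in enumerate(words):
        if rule == "synonym" and w in SYNONYMS:
            words[idx] = SYNONYMS[w]
            return " ".join(words)
        if rule == "antonym" and w in ANTONYMS:
            words[idx] = ANTONYMS[w]
            return " ".join(words)
        if rule == "typo" and len(w) > 3:
            words[idx] = make_typo(w)
            return " ".join(words)
    if rule == "negation":
        # Naive negation heuristic: insert "not" after the first copula.
        for idx, w in enumerate(words):
            if w in {"is", "are", "was", "were"}:
                words.insert(idx + 1, "not")
                return " ".join(words)
    return sentence

original = "the dog is happy today"
pairs = {r: transform(original, r)
         for r in ("synonym", "antonym", "typo", "negation")}
```

Each rule yields a different subset of sentence pairs, mirroring how HEROS's subsets probe whether an encoder's similarity scores distinguish, say, antonym substitution from synonym substitution.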
URI: | https://scholars.lib.ntu.edu.tw/handle/123456789/636985 | ISBN: | 9781959429777 | ISSN: | 0736-587X |
Appears in Collections: | Department of Electrical Engineering |
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated by their specific license terms.