https://scholars.lib.ntu.edu.tw/handle/123456789/632062
Title: Relating Neural Text Degeneration to Exposure Bias
Authors: Chiang T.-R.; Yun-Nung Chen
Publication Date: 2021
Pages: 228-239
Source Publication: BlackboxNLP 2021 - Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Abstract: This work focuses on relating two mysteries in neural-based text generation: exposure bias and text degeneration. Despite the long time since exposure bias was first raised and the numerous studies proposing remedies for it, to our knowledge its impact on text generation has not yet been verified. Text degeneration is a problem from which the widely used pre-trained language model GPT-2 was recently found to suffer (Holtzman et al., 2020). Motivated by the unknown cause of text degeneration, in this paper we attempt to relate these two mysteries. Specifically, we first qualitatively and quantitatively identify mistakes made before text degeneration occurs. Then we investigate the significance of these mistakes by inspecting the hidden states in GPT-2. Our results show that text degeneration is likely to be partly caused by exposure bias. We also study the self-reinforcing mechanism of text degeneration, explaining why the mistakes amplify. In sum, our study provides a more concrete foundation for further investigation of the exposure bias and text degeneration problems. © 2021 Association for Computational Linguistics.
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85127262785&partnerID=40&md5=c1ed9ef534f737cab34949245c3469f7
https://scholars.lib.ntu.edu.tw/handle/123456789/632062
SDG/Keywords: Concrete foundation; Hidden state; Language model; Reinforcing mechanism; Self reinforcing; Text generations
Appears in Collections: Department of Computer Science and Information Engineering
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated in their licensing terms.