Title: Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders
Authors: Liu, A. T.; Yang, S.-W.; Chi, P.-H.; Hsu, P.-C.; Hung-yi Lee
Keywords: Low resource; Speech representation learning; Transformer encoders; Unsupervised training
Date issued: 2020
Volume: 2020-May
Pages: 6419-6423
Source publication: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Abstract: We present Mockingjay, a new speech representation learning approach in which bidirectional Transformer encoders are pre-trained on a large amount of unlabeled speech. Previous speech representation methods learn by conditioning on past frames and predicting information about future frames, whereas Mockingjay is designed to predict the current frame by jointly conditioning on both past and future contexts. The Mockingjay representation improves performance on a wide range of downstream tasks, including phoneme classification, speaker recognition, and sentiment classification on spoken content, while outperforming other approaches. Mockingjay is empirically powerful and can be fine-tuned with downstream models; with only 2 epochs of fine-tuning, performance improves dramatically. In a low-resource setting with only 0.1% of the labeled data, it outperforms Mel-feature baselines trained on all (100%) of the labeled data. © 2020 IEEE
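The pretraining idea described in the abstract — masking frames of the input and predicting them from both past and future context with a bidirectional Transformer encoder — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the model sizes, masking ratio, and helper names are assumptions for the example.

```python
import torch
import torch.nn as nn


class MaskedAcousticModel(nn.Module):
    """Sketch of bidirectional masked-frame pretraining (hyperparameters assumed)."""

    def __init__(self, n_mels=80, d_model=256, n_heads=4, n_layers=3):
        super().__init__()
        self.input_proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=512, batch_first=True
        )
        # Full self-attention over the whole utterance: every frame attends
        # to both past and future frames (positional encoding omitted here).
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.pred_head = nn.Linear(d_model, n_mels)

    def forward(self, frames, mask):
        # Zero out the masked frames; the encoder must reconstruct them
        # from the surrounding (unmasked) context.
        x = frames.masked_fill(mask.unsqueeze(-1), 0.0)
        h = self.encoder(self.input_proj(x))
        return self.pred_head(h)


# Toy pretraining step on random "Mel-spectrogram" frames.
batch, time, n_mels = 2, 50, 80
frames = torch.randn(batch, time, n_mels)
mask = torch.zeros(batch, time, dtype=torch.bool)
mask[:, ::7] = True  # mask a subset of frames

model = MaskedAcousticModel()
pred = model(frames, mask)
# Reconstruction loss computed only on the masked positions.
loss = nn.functional.l1_loss(pred[mask], frames[mask])
```

After pretraining, the encoder's hidden states `h` would serve as the speech representation fed to downstream classifiers, or the whole encoder could be fine-tuned jointly with a downstream model as the abstract describes.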
URI: https://www.scopus.com/inward/record.url?eid=2-s2.0-85091177950&partnerID=40&md5=8344b0822909ac14e7695dfb35f61517 ; https://scholars.lib.ntu.edu.tw/handle/123456789/558973
ISSN: 1520-6149
DOI: 10.1109/ICASSP40776.2020.9054458
Appears in Collections: Department of Electrical Engineering
Unless their copyright terms state otherwise, items in this institutional repository are protected by copyright, with all rights reserved.