https://scholars.lib.ntu.edu.tw/handle/123456789/558967
Title: Towards Unsupervised Speech Recognition and Synthesis with Quantized Speech Representation Learning
Authors: Liu, A. H.; Tu, T.; HUNG-YI LEE; LIN-SHAN LEE
Keywords: representation quantization; speech recognition; speech representation; speech synthesis
Date Issued: 2020
Volume: 2020-May
Pages: 7259-7263
Source: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Abstract: In this paper we propose a Sequential Representation Quantization AutoEncoder (SeqRQ-AE) that learns from primarily unpaired audio data and produces sequences of representations very close to the phoneme sequences of speech utterances. This is achieved by proper temporal segmentation, which makes the representations phoneme-synchronized, and proper phonetic clustering, which keeps the total number of distinct representations close to the number of phonemes. The mapping between the distinct representations and phonemes is learned from a small amount of annotated paired data. Preliminary experiments on LJSpeech demonstrated that the learned representations for vowels have relative locations in latent space closely paralleling those in the IPA vowel chart defined by linguistics experts. With less than 20 minutes of annotated speech, our method outperformed existing methods on phoneme recognition and is able to synthesize intelligible speech that surpasses that of our baseline model. © 2020 IEEE.
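The core quantization step the abstract describes — mapping each continuous frame-level representation to the nearest entry in a small codebook whose size approximates the phoneme inventory — can be sketched as follows. This is a minimal, hypothetical illustration of nearest-neighbor vector quantization, not the authors' implementation; all names and sizes here are assumptions for the example.

```python
import numpy as np

def quantize(frames, codebook):
    """Map each continuous frame vector to its nearest codebook entry.

    frames:   (T, D) array of encoder outputs.
    codebook: (K, D) array; K is chosen close to the phoneme count.
    Returns the (T,) code indices and the (T, D) quantized vectors.
    """
    # Squared Euclidean distance from every frame to every code.
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)  # nearest code per frame
    return idx, codebook[idx]

# Toy example: 5 frames, 2-D representations, a 3-entry "phoneme" codebook.
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 2))
codebook = rng.normal(size=(3, 2))
idx, quantized = quantize(frames, codebook)
```

In the paper's setting, consecutive frames assigned to the same code would then be merged (the temporal segmentation step) so the quantized sequence aligns with the phoneme sequence.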
URI: https://www.scopus.com/inward/record.url?eid=2-s2.0-85089211160&partnerID=40&md5=c7626c6dc27fb6861bcb46d4e3926b2d
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/558967
DOI: 10.1109/ICASSP40776.2020.9053571
SDG/Keywords: Audio signal processing; Linguistics; Speech; Speech communication; Audio data; Autoencoders; Baseline models; Phoneme recognition; Relative location; Speech utterance; Temporal segmentation; Speech recognition
Appears in Collections: Department of Electrical Engineering
Items in the IR system are protected by copyright, with all rights reserved, unless their copyright terms are otherwise specified.