https://scholars.lib.ntu.edu.tw/handle/123456789/632482
Title: | Meta-TTS: Meta-Learning for Few-Shot Speaker Adaptive Text-to-Speech | Authors: | Huang, S.; Lin, C.; Liu, D.; Chen, Y.; Hung-Yi Lee |
Keywords: | Adaptation models; Cloning; Decoding; Encoding; few-shot; MAML; meta-learning; speaker adaptation; Task analysis; Testing; Training; TTS | Publication Date: | 2022 | Source: | IEEE/ACM Transactions on Audio, Speech, and Language Processing | Abstract: | Personalizing a speech synthesis system is a highly desired application, where the system generates speech in the user's voice from only a few enrolled recordings. Recent work takes two main approaches to building such a system: speaker adaptation and speaker encoding. On the one hand, speaker adaptation methods fine-tune a trained multi-speaker text-to-speech (TTS) model with a few enrolled samples. However, they require at least thousands of fine-tuning steps for high-quality adaptation, making them hard to deploy on devices. On the other hand, speaker encoding methods encode enrollment utterances into a speaker embedding, and the trained TTS model synthesizes the user's speech conditioned on that embedding. Nevertheless, the speaker encoder suffers from the generalization gap between seen and unseen speakers. In this paper, we propose applying a meta-learning algorithm to the speaker adaptation method. More specifically, we use Model-Agnostic Meta-Learning (MAML) as the training algorithm of a multi-speaker TTS model, aiming to find a meta-initialization from which the model can quickly adapt to any few-shot speaker-adaptation task. The meta-trained TTS model can therefore also be adapted to unseen speakers efficiently. Our experiments compare the proposed method (Meta-TTS) against two baselines: a speaker adaptation baseline and a speaker encoding baseline. The evaluation results show that Meta-TTS synthesizes speech with high speaker similarity from a few enrollment samples, needs fewer adaptation steps than the speaker adaptation baseline, and outperforms the speaker encoding baseline under the same training scheme.
When the speaker encoder of the baseline is pre-trained with data from an extra 8,371 speakers, Meta-TTS still outperforms the baseline on the LibriTTS dataset and achieves comparable results on the VCTK dataset. |
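The core idea in the abstract is MAML's two-level optimization: an inner loop that adapts a shared initialization to each few-shot task, and an outer loop that updates that initialization so post-adaptation performance improves. The sketch below is purely illustrative and is not the paper's implementation: a toy linear regressor stands in for the multi-speaker TTS model, each 1-D regression task stands in for a few-shot speaker-adaptation task, and it uses the first-order MAML approximation; all names and hyperparameters are assumptions.

```python
# Minimal first-order MAML sketch: meta-learn an initialization that adapts
# to a new task in one gradient step. Illustrative only; the paper applies
# this scheme to a multi-speaker TTS model, not a linear regressor.
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """Squared-error loss of the linear model y_hat = w * x, and dL/dw."""
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

def make_task(slope, n=8):
    """A 'task': support set for adaptation, query set for meta-evaluation."""
    x = rng.normal(size=2 * n)
    y = slope * x
    return x[:n], y[:n], x[n:], y[n:]

def maml_step(w, tasks, inner_lr=0.05, outer_lr=0.01, inner_steps=1):
    """One outer update: adapt to each task from the shared init w, then
    move w to reduce the post-adaptation (query) loss. Using the query
    gradient directly is the first-order MAML (FOMAML) approximation."""
    meta_grad = 0.0
    for x_sup, y_sup, x_qry, y_qry in tasks:
        w_task = w
        for _ in range(inner_steps):          # inner loop: per-task adaptation
            _, g = loss_and_grad(w_task, x_sup, y_sup)
            w_task -= inner_lr * g
        _, g_qry = loss_and_grad(w_task, x_qry, y_qry)
        meta_grad += g_qry
    return w - outer_lr * meta_grad / len(tasks)

# Meta-train over a distribution of tasks whose slopes cluster around 2.0.
w = 0.0
for _ in range(200):
    tasks = [make_task(rng.normal(2.0, 0.1)) for _ in range(4)]
    w = maml_step(w, tasks)

# The meta-initialization adapts to an unseen task with a single inner step.
x_sup, y_sup, x_qry, y_qry = make_task(2.0)
_, g = loss_and_grad(w, x_sup, y_sup)
w_adapted = w - 0.05 * g
loss_after, _ = loss_and_grad(w_adapted, x_qry, y_qry)
```

The analogy to the paper: the inner loop corresponds to fine-tuning the TTS model on a speaker's few enrollment samples, and the outer loop shapes the pre-trained weights so that this fine-tuning converges in far fewer steps than ordinary multi-speaker pre-training would allow.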
URI: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85128692277&doi=10.1109%2fTASLP.2022.3167258&partnerID=40&md5=d118c091123c84a4b2aa71975833b1a7 https://scholars.lib.ntu.edu.tw/handle/123456789/632482 |
ISSN: | 2329-9290 | DOI: | 10.1109/TASLP.2022.3167258 | SDG/Keywords: | Encoding (symbols); Job analysis; Learning algorithms; Signal encoding; Speech recognition; Adaptation models; Encodings; Few-shot; Metalearning; Model agnostic meta-learning; Speaker adaptation; Speech models; Task analysis; Text to speech; Speech synthesis |
Appears in Collections: | Department of Electrical Engineering |
Items in this IR system are protected by copyright, with all rights reserved, unless otherwise indicated in their respective copyright terms.