Authors: Huang, Wei Ping; Chen, Po Chun; Huang, Sung Feng; Lee, Hung-Yi
Date available: 2023-07-17
Date issued: 2022-01-01
ISSN: 2308-457X
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/633663
Abstract: This paper studies a transferable phoneme embedding framework that aims to deal with the cross-lingual text-to-speech (TTS) problem under the few-shot setting. Transfer learning is a common approach to few-shot learning, since training from scratch on few-shot data is bound to overfit. Still, we find that the naive transfer learning approach fails to adapt to unseen languages under extremely few-shot settings, where less than 8 minutes of data is provided. We address the problem by proposing a framework that consists of a phoneme-based TTS model and a codebook module that projects phonemes from different languages into a learned latent space. Furthermore, by utilizing phoneme-level averaged self-supervised learned features, we effectively improve the quality of the synthesized speech. Experiments show that 4 utterances, about 30 seconds of data, are enough to synthesize intelligible speech when adapting to an unseen language using our framework.
Keywords: cross-lingual | few-shot | low-resource language | self-supervised features | speech synthesis | transfer learning
[SDGs] SDG4
Title: Few-Shot Cross-Lingual TTS Using Transferable Phoneme Embedding
Type: conference paper
DOI: 10.21437/Interspeech.2022-994
Scopus ID: 2-s2.0-85140097763 (https://api.elsevier.com/content/abstract/scopus_id/85140097763)
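The abstract names two mechanisms: a codebook module that projects language-specific phonemes into a shared latent space, and phoneme-level averaging of frame-level self-supervised (SSL) features. The sketch below illustrates both ideas only in spirit; the function names, shapes, and the softmax-attention codebook lookup are assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def codebook_project(phoneme_emb, codebook):
    """Project a phoneme embedding onto a learned codebook (assumed design).

    phoneme_emb: (d,) language-specific phoneme embedding
    codebook:    (K, d) shared learned codewords
    Returns a (d,) vector in the shared latent space: a softmax-weighted
    combination of codewords, so phonemes of an unseen language land in
    the same space as those of seen languages.
    """
    scores = codebook @ phoneme_emb            # (K,) similarity to each codeword
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention weights
    return weights @ codebook                  # (d,) convex combination of codewords

def phoneme_level_average(ssl_frames, boundaries):
    """Average frame-level SSL features within each phoneme segment.

    ssl_frames: (T, d) frame features from a self-supervised speech model
    boundaries: list of (start, end) frame indices, one pair per phoneme
    Returns (num_phonemes, d) phoneme-level averaged features.
    """
    return np.stack([ssl_frames[s:e].mean(axis=0) for s, e in boundaries])

# Toy usage with random data standing in for real embeddings and SSL features.
K, d, T = 8, 16, 10
codebook = rng.standard_normal((K, d))
emb = rng.standard_normal(d)
latent = codebook_project(emb, codebook)       # (16,) shared-space phoneme vector

frames = rng.standard_normal((T, d))
phon_feats = phoneme_level_average(frames, [(0, 4), (4, 10)])
print(latent.shape, phon_feats.shape)          # (16,) (2, 16)
```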