Title: | VQVC+: One-shot voice conversion by vector quantization and U-Net architecture | Authors: | Wu D.-Y.; Chen Y.-H.; Hung-yi Lee |
Keywords: | Architecture; Learning systems; Signal encoding; Speech communication; Audio quality; Auto encoders; Explicit information; Information bottleneck; Latent vectors; NET architecture; Subjective evaluations; Voice conversion; Vector quantization | Publication date: | 2020 | Volume: | 2020-October | Pages: | 4691-4695 | Source publication: | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH | Abstract: | Voice conversion (VC) is a task that transforms the source speaker's timbre, accent, and tones in audio into another speaker's while preserving the linguistic content. It remains a challenging task, especially in a one-shot setting. Auto-encoder-based VC methods disentangle the speaker and the content in input speech without explicit information about the speaker's identity, so these methods can further generalize to unseen speakers. The disentanglement capability is achieved by vector quantization (VQ), adversarial training, or instance normalization (IN). However, imperfect disentanglement may harm the quality of the output speech. In this work, to further improve audio quality, we use the U-Net architecture within an auto-encoder-based VC system. We find that to leverage the U-Net architecture, a strong information bottleneck is necessary. The VQ-based method, which quantizes the latent vectors, can serve this purpose. The objective and subjective evaluations show that the proposed method performs well in both audio naturalness and speaker similarity. Copyright © 2020 ISCA |
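The abstract describes quantizing latent vectors as an information bottleneck: each continuous encoder output is snapped to its nearest entry in a learned codebook, discarding fine-grained (speaker-dependent) detail. A minimal sketch of that nearest-neighbor quantization step, using numpy and illustrative shapes and names not taken from the paper:

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (L2 distance)."""
    # latents: (T, D) frame-level latents; codebook: (K, D) learned codes
    dists = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)  # (T, K)
    indices = dists.argmin(axis=1)        # one discrete code index per frame
    return codebook[indices], indices     # quantized latents (T, D), indices (T,)

# Toy example: 5 frames of 4-dim latents, a codebook of 3 entries
rng = np.random.default_rng(0)
latents = rng.normal(size=(5, 4))
codebook = rng.normal(size=(3, 4))
quantized, idx = vector_quantize(latents, codebook)
print(quantized.shape, idx.shape)  # (5, 4) (5,)
```

In training, the codebook and encoder would be learned jointly (e.g. with a straight-through gradient estimator); the sketch only shows the forward quantization that creates the bottleneck.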
URI: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85098233777&doi=10.21437%2fInterspeech.2020-1443&partnerID=40&md5=d336f3475206a343b83f5079ac99413c https://scholars.lib.ntu.edu.tw/handle/123456789/580916 |
ISSN: | 2308-457X | DOI: | 10.21437/Interspeech.2020-1443 |
Appears in Collections: | Department of Electrical Engineering
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated in their individual copyright terms.