https://scholars.lib.ntu.edu.tw/handle/123456789/635593
Title: Parallel Synthesis for Autoregressive Speech Generation
Authors: Hsu, Po-Chun; Liu, Da-Rong; Liu, Andy T.; Lee, Hung-Yi
Keywords: autoregressive model; neural network; neural speech synthesis; vocoder
Date Issued: 1-Jan-2023
Volume: 31
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Abstract: Autoregressive neural vocoders have achieved outstanding performance and are widely used in speech synthesis tasks such as text-to-speech and voice conversion. An autoregressive vocoder predicts a sample at one time step conditioned on the samples at previous time steps. Although it can generate highly natural human speech, the iterative generation inevitably makes the synthesis time proportional to the utterance length, leading to low efficiency. Many works have therefore sought to generate the whole speech time sequence in parallel, proposing GAN-based, flow-based, and score-based vocoders. This paper proposes a new perspective on autoregressive generation. Instead of iteratively predicting samples along the time axis, the proposed model performs frequency-wise autoregressive generation (FAR) and bit-wise autoregressive generation (BAR) to synthesize speech. In FAR, a speech utterance is first split into different frequency subbands, and the model generates each subband conditioned on the previously generated one; the full-band speech is then reconstructed from these subbands. Similarly, in BAR, an 8-bit quantized signal is generated iteratively starting from the first bit. By redesigning the autoregressive method to operate in domains other than the time domain, the number of iterations is no longer proportional to the utterance's length but to the number of subbands/bits, significantly increasing inference efficiency. In addition, a post-filter is employed to sample audio signals from the output posteriors, with a training objective designed around the characteristics of the proposed autoregressive methods. Experimental results show that the proposed model can synthesize speech faster than real time without GPU acceleration. Compared with baseline autoregressive and non-autoregressive vocoders, the proposed model achieves better MUSHRA results and shows good generalization ability when synthesizing 44 kHz speech or utterances from unseen speakers.
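The efficiency claim in the abstract comes down to where the autoregressive loop runs. A minimal toy sketch below illustrates that iteration structure only: the loops run over subbands (FAR) and bit positions (BAR) rather than time steps, so the iteration count is subbands × bits regardless of utterance length. The `predict` stand-in, the function name, and all parameters are illustrative assumptions, not the paper's actual neural model.

```python
import random

def far_bar_generate(num_subbands=4, num_bits=8, num_frames=100, seed=0):
    """Toy sketch of the FAR/BAR iteration structure (illustrative only,
    not the paper's model).

    A time-domain autoregressive vocoder needs one iteration per sample,
    so cost grows with utterance length. Here the loops run over frequency
    subbands (FAR) and bit positions (BAR); each iteration emits a whole
    frame sequence at once, so the iteration count is
    num_subbands * num_bits, independent of num_frames.
    """
    rng = random.Random(seed)

    def predict(prev_subband, bit_index):
        # stand-in for the neural predictor: returns one bit-plane per call
        return [rng.randint(0, 1) for _ in range(num_frames)]

    iterations = 0
    subbands = []
    for _ in range(num_subbands):          # FAR: condition on previous subband
        prev = subbands[-1] if subbands else None
        bit_planes = []
        for b in range(num_bits):          # BAR: generate bit-planes, MSB first
            bit_planes.append(predict(prev, b))
            iterations += 1
        # combine bit-planes into an 8-bit quantized subband signal
        subband = [
            sum(plane[t] << (num_bits - 1 - b)
                for b, plane in enumerate(bit_planes))
            for t in range(num_frames)
        ]
        subbands.append(subband)
    return subbands, iterations
```

Doubling `num_frames` leaves the iteration count unchanged at `num_subbands * num_bits`, which is the source of the speed-up the abstract describes.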
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/635593
ISSN: 2329-9290
DOI: 10.1109/TASLP.2023.3301212
Appears in Collections: Department of Electrical Engineering
Items in the IR are protected by copyright, with all rights reserved, unless otherwise indicated by their specific copyright terms.