S2VC: A framework for any-to-any voice conversion with self-supervised pretrained representations
Journal
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Journal Volume
6
Pages
4785-4789
Date Issued
2021
Author(s)
Abstract
Any-to-any voice conversion (VC) aims to convert the timbre of utterances from and to any speakers, seen or unseen during training. Various any-to-any VC approaches have been proposed, such as AUTOVC, AdaINVC, and FragmentVC. AUTOVC and AdaINVC utilize source and target encoders to disentangle the content and speaker information of the features. FragmentVC utilizes two encoders to encode source and target information and adopts cross attention to align the source and target features with similar phonetic content. Moreover, pretrained features have been adopted: AUTOVC uses the d-vector to extract speaker information, and self-supervised learning (SSL) features such as wav2vec 2.0 are used in FragmentVC to extract the phonetic content information. Different from previous works, we propose S2VC, which utilizes Self-Supervised features as both the source and target features for the VC model. The supervised phoneme posteriorgram (PPG), which is believed to be speaker-independent and is widely used in VC to extract content information, is chosen as a strong baseline against the SSL features. Both the objective and subjective evaluations show that the model taking the SSL feature CPC as both source and target features outperforms the one taking PPG as the source feature, suggesting that SSL features have great potential for improving VC. Copyright © 2021 ISCA.
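To make the alignment mechanism in the abstract concrete, below is a minimal PyTorch sketch of the cross-attention idea it describes: encoded source (content) features query encoded target (speaker) features, and the aligned result is decoded to spectrogram frames. This is an illustrative sketch under stated assumptions, not the authors' implementation; the class name, the linear encoders/decoder, and all dimensions (e.g., 256-dimensional SSL features, 80 mel bins) are hypothetical.

import torch
import torch.nn as nn

class CrossAttentionVC(nn.Module):
    """Illustrative any-to-any VC skeleton: source features attend over
    target features with similar phonetic content (cf. FragmentVC/S2VC)."""

    def __init__(self, feat_dim=256, n_heads=4, n_mels=80):
        super().__init__()
        # Separate encoders for source (content) and target (speaker) features.
        self.src_encoder = nn.Linear(feat_dim, feat_dim)
        self.tgt_encoder = nn.Linear(feat_dim, feat_dim)
        # Cross attention: source frames query target frames.
        self.cross_attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        # Decode aligned features to mel-spectrogram frames.
        self.decoder = nn.Linear(feat_dim, n_mels)

    def forward(self, src_feats, tgt_feats):
        # src_feats: (batch, T_src, feat_dim) SSL features of the source utterance
        # tgt_feats: (batch, T_tgt, feat_dim) SSL features of target-speaker speech
        q = self.src_encoder(src_feats)
        kv = self.tgt_encoder(tgt_feats)
        aligned, _ = self.cross_attn(query=q, key=kv, value=kv)
        return self.decoder(aligned)  # (batch, T_src, n_mels)

if __name__ == "__main__":
    model = CrossAttentionVC()
    src = torch.randn(1, 120, 256)  # stand-in for CPC features of the source
    tgt = torch.randn(1, 300, 256)  # stand-in for CPC features of the target
    mel = model(src, tgt)
    print(mel.shape)  # torch.Size([1, 120, 80])

In the paper's framing, the same pretrained SSL extractor (e.g., CPC) would produce both src_feats and tgt_feats, whereas the PPG baseline would replace only the source features; the sketch above stays agnostic to the extractor and takes precomputed features as input.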
Subjects
Any-to-any
Representation learning
Self-supervised learning
Voice conversion
Linguistics
Speech communication
Supervised learning
Content information
Conversion model
Posteriorgram
Source features
Target feature
Target information
Signal encoding
Type
conference paper