MelHuBERT: A Simplified HuBERT on Mel Spectrograms
Journal
2023 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2023
ISBN
9798350306897
Date Issued
2023-01-01
Author(s)
Abstract
Self-supervised models have had great success in learning speech representations that can generalize to various downstream tasks. However, most self-supervised models require a large amount of compute and multiple GPUs to train, significantly hampering the development of self-supervised learning. In an attempt to reduce the computation of training, we revisit the training of HuBERT, a highly successful self-supervised model. We improve and simplify several key components, including the loss function, the input representation, and training in multiple stages. Our model, MelHuBERT, achieves favorable performance on phone recognition, speaker identification, and automatic speech recognition compared to HuBERT, while saving 31.2% of the pre-training time, or equivalently 33.5% of the MACs per second of speech. The code and pretrained models are available at https://github.com/nervjack2/MelHuBERT.
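The abstract names Mel spectrograms as the simplified input representation. As context, a minimal NumPy sketch of log-Mel feature extraction is shown below; the specific parameters (40 Mel bins, 25 ms window, 10 ms hop at 16 kHz) are common defaults and an assumption here, not taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    # Standard HTK-style Mel scale.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the Mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(wav, sr=16000, n_fft=400, hop=160, n_mels=40):
    # Frame with a Hann window, take magnitude FFT,
    # apply Mel filters, then log-compress.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wav) - n_fft) // hop
    frames = np.stack([wav[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, n=n_fft, axis=1))   # (frames, n_fft//2+1)
    mel = mag @ mel_filterbank(n_mels, n_fft, sr).T      # (frames, n_mels)
    return np.log(mel + 1e-10)

# One second of noise at 16 kHz -> 98 frames of 40-dim features.
feats = log_mel_spectrogram(np.random.randn(16000))
print(feats.shape)  # (98, 40)
```

Feeding such precomputed features to the Transformer, instead of raw waveforms through a convolutional front end, is one way the per-second compute of pre-training can be reduced.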
Subjects
Automatic Speech Recognition | Self-Supervised Learning | Speaker Recognition | Speech Representations
Type
conference paper
