Meta Learning for End-To-End Low-Resource Speech Recognition
Journal
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Journal Volume
2020-May
Pages
7844-7848
Date Issued
2020
Author(s)
Abstract
In this paper, we proposed to apply a meta-learning approach to low-resource automatic speech recognition (ASR). We formulated ASR for different languages as different tasks, and meta-learned the initialization parameters from many pretraining languages to achieve fast adaptation on an unseen target language, via the recently proposed model-agnostic meta-learning algorithm (MAML). We evaluated the proposed approach using six languages as pretraining tasks and four languages as target tasks. Preliminary results showed that the proposed method, MetaASR, significantly outperforms the state-of-the-art multitask pretraining approach on all target languages with different combinations of pretraining languages. In addition, owing to MAML's model-agnostic property, this paper also opens a new research direction of applying meta-learning to more speech-related applications. © 2020 IEEE.
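To illustrate the idea described in the abstract, the following is a minimal sketch of MAML-style meta-learning over a batch of "language" tasks: an inner loop adapts a shared initialization on each task's support data, and an outer loop updates that initialization from the adapted models' query losses. The toy linear model, synthetic task sampler, and hyperparameters are hypothetical stand-ins, not the authors' ASR architecture or training setup.

```python
# Sketch of MAML: learn an initialization that adapts quickly to new tasks.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def model(params, x):
    """Toy linear 'ASR' model standing in for a real encoder-decoder."""
    w, b = params
    return x @ w + b

def loss_fn(params, x, y):
    return F.mse_loss(model(params, x), y)

def sample_task(input_dim=8, output_dim=4, n=32):
    """Hypothetical per-language task: a random linear mapping plus noise."""
    true_w = torch.randn(input_dim, output_dim)
    x = torch.randn(n, input_dim)
    y = x @ true_w + 0.1 * torch.randn(n, output_dim)
    return (x[:16], y[:16]), (x[16:], y[16:])  # support / query split

# Meta-parameters: the shared initialization learned across languages.
meta_params = [torch.zeros(8, 4, requires_grad=True),
               torch.zeros(4, requires_grad=True)]
meta_opt = torch.optim.Adam(meta_params, lr=1e-2)
inner_lr = 1e-1

for step in range(200):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):  # batch of pretraining "languages"
        (xs, ys), (xq, yq) = sample_task()
        # Inner loop: one gradient step of adaptation on the support set.
        grads = torch.autograd.grad(loss_fn(meta_params, xs, ys),
                                    meta_params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(meta_params, grads)]
        # Outer objective: loss of the adapted parameters on the query set.
        meta_loss = meta_loss + loss_fn(adapted, xq, yq)
    meta_loss.backward()   # backprop through the inner update to meta_params
    meta_opt.step()

# At deployment, the learned meta_params would be fine-tuned with a few
# gradient steps on data from an unseen target language.
```

The key design point, as in MAML, is that the outer gradient flows through the inner adaptation step (`create_graph=True`), so the initialization is explicitly optimized for post-adaptation performance rather than for the pretraining tasks themselves.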
Subjects
IARPA-BABEL; language adaptation; low-resource; meta-learning; speech recognition
Type
conference paper