DARTS-ASR: Differentiable architecture search for multilingual speech recognition and adaptation
Journal
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Journal Volume
2020-October
Pages
1803-1807
Date Issued
2020
Author(s)
Abstract
In previous works, only the parameter weights of ASR models are optimized under a fixed-topology architecture. However, the design of a successful model architecture has always relied on human experience and intuition. In addition, many hyperparameters related to the model architecture need to be manually tuned. Therefore, in this paper we propose an ASR approach with efficient gradient-based architecture search, DARTS-ASR. To examine the generalizability of DARTS-ASR, we apply our approach not only to many languages to perform monolingual ASR, but also to a multilingual ASR setting. Following previous works, we conducted experiments on a multilingual dataset, IARPA BABEL. The experimental results show that our approach outperformed the baseline fixed-topology architecture by 10.2% and 10.0% relative character error rate reduction under monolingual and multilingual ASR settings, respectively. Furthermore, we perform some analysis on the architectures searched by DARTS-ASR. Copyright © 2020 ISCA
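The gradient-based architecture search the abstract refers to follows the DARTS idea of relaxing the discrete choice among candidate operations on each edge into a softmax-weighted mixture, so the architecture weights can be trained by gradient descent alongside the model weights. A minimal NumPy sketch of this continuous relaxation (the candidate operations and names here are simplified, hypothetical stand-ins for the convolution/pooling candidates in an actual DARTS search space):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical candidate operations on one edge of the search cell.
candidate_ops = [
    lambda x: x,                    # identity (skip connection)
    lambda x: np.maximum(x, 0.0),   # ReLU, standing in for a conv op
    lambda x: np.zeros_like(x),     # "zero" op, which effectively prunes the edge
]

def mixed_op(x, alpha):
    """DARTS-style continuous relaxation: the edge output is a
    softmax-weighted sum over all candidate operations. The
    architecture parameters `alpha` are learned by gradient descent;
    after search, the op with the largest weight is kept."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, candidate_ops))

x = np.array([-1.0, 2.0])
alpha = np.zeros(3)          # uniform architecture weights -> plain average
y = mixed_op(x, alpha)       # averages identity, ReLU, and zero outputs
```

After search converges, the discrete architecture is recovered by keeping, on each edge, the operation with the highest softmax weight.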
Subjects
Architecture; Speech communication; Topology; Character error rates; Fixed topologies; Gradient based; Hyperparameters; Model architecture; Multilingual speech recognition; Relative reduction; Speech recognition
SDGs
Type
conference paper
