SERIL: Noise adaptive speech enhancement using regularization-based incremental learning
Journal
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Journal Volume
2020-October
Pages
2432-2436
Date Issued
2020
Author(s)
Abstract
Numerous noise adaptation techniques have been proposed to fine-tune deep-learning models in speech enhancement (SE) for mismatched noise environments. Nevertheless, adaptation to a new environment may lead to catastrophic forgetting of the previously learned environments. The catastrophic forgetting issue degrades the performance of SE in real-world embedded devices, which often revisit previous noise environments. The nature of embedded devices does not allow solving the issue with additional storage of all pre-trained models or earlier training data. In this paper, we propose a regularization-based incremental learning SE (SERIL) strategy, complementing existing noise adaptation strategies without using additional storage. With a regularization constraint, the parameters are updated to the new noise environment while retaining the knowledge of the previous noise environments. The experimental results show that, when faced with a new noise domain, the SERIL model outperforms the unadapted SE model. Meanwhile, compared with the current adaptive technique based on fine-tuning, the SERIL model can reduce the forgetting of previous noise environments by 52%. The results verify that the SERIL model can effectively adjust itself to new noise environments while overcoming the catastrophic forgetting issue. The results make SERIL a favorable choice for real-world SE applications, where the noise environment changes frequently. © 2020 ISCA
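To illustrate the regularization constraint described in the abstract, the sketch below shows one common way such a penalty can be added to an SE training objective. It is a minimal illustration only, assuming an EWC-style quadratic penalty; the function name, the `reg_weight` value, and the way importance weights are obtained are assumptions for this example and are not taken from the paper.

```python
# Minimal sketch of a regularization-based incremental-learning loss
# (EWC-style quadratic penalty; names and values are illustrative).
import torch


def incremental_se_loss(se_loss, model, old_params, importance, reg_weight=100.0):
    """Combine the SE loss on the new noise domain with a quadratic penalty
    that keeps parameters close to the values learned on previous domains,
    weighted by each parameter's estimated importance."""
    penalty = torch.zeros((), device=se_loss.device)
    for name, param in model.named_parameters():
        if name in old_params:
            penalty = penalty + (importance[name] * (param - old_params[name]) ** 2).sum()
    return se_loss + reg_weight * penalty
```

In this kind of setup, after adapting to a noise domain one would snapshot the model parameters as `old_params` and estimate `importance` (for example, from accumulated squared gradients of the SE loss), then reuse both when training on the next domain so that updates stay close to parameter values that mattered for earlier environments.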
Subjects
Digital storage; Speech communication; Speech enhancement; Adaptive technique; Catastrophic forgetting; Embedded device; Incremental learning; Learning models; Noise adaptation; Noise environments; Training data; Deep learning
Type
conference paper
