Title: Improving Generalizability of Distilled Self-Supervised Speech Processing Models Under Distorted Settings
Authors: Huang, Kuan Po; Fu, Yu Kuan; Hsu, Tsu Yuan; Gutierrez, Fabian Ritter; Wang, Fan Lin; Tseng, Liang Hsuan; Zhang, Yu; Lee, Hung-Yi
Type: conference paper
Date issued: 2023-01-01
Date available: 2023-07-17
ISBN: 9798350396904
DOI: 10.1109/SLT54892.2023.10022474
Scopus ID: 2-s2.0-85140729518
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/633676
Scopus API: https://api.elsevier.com/content/abstract/scopus_id/85140729518
Keywords: Distortions; Domain Adversarial Training; Domain-adaptive Pre-training; Self-supervised Learning; SUPERB

Abstract: Self-supervised learning (SSL) speech pre-trained models perform well across various speech processing tasks. Distilled versions of SSL models have been developed to meet the needs of on-device speech applications. Although distilled models perform similarly to the original SSL models, they suffer even greater performance degradation than their originals in distorted environments. This paper proposes applying Cross-Distortion Mapping and Domain Adversarial Training to SSL models during knowledge distillation to narrow the performance gap caused by the domain mismatch problem. Results show consistent performance improvements under both in-domain and out-of-domain distorted setups across different downstream tasks while keeping the model size efficient.
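The abstract names Domain Adversarial Training as one of the two techniques applied during distillation. As a generic illustration only (not taken from this paper), the core mechanism of Domain Adversarial Training is a gradient reversal layer: the domain classifier's gradient is sign-flipped (and scaled by a factor lambda) before it reaches the shared feature extractor, so the extractor learns domain-invariant features. The scalar example below is a hypothetical sketch of that sign flip via the chain rule; all names and values are illustrative assumptions.

```python
def forward(x, w, v):
    """Tiny scalar model: feature extractor f = w*x, domain logit d = v*f."""
    f = w * x  # shared feature extractor (what distillation would train)
    d = v * f  # domain classifier logit (illustrative stand-in)
    return f, d

def domain_grad_w(x, v, lam=1.0, reverse=True):
    """Gradient of the domain loss L = d with respect to extractor weight w.

    Without reversal, dL/dw = v * x (extractor helps the domain classifier).
    A gradient reversal layer multiplies the gradient flowing into the
    extractor by -lam, pushing the extractor toward domain-invariant features.
    """
    grad_f = v  # dL/df for L = d = v*f
    if reverse:
        grad_f = -lam * grad_f  # gradient reversal: flip sign, scale by lam
    return grad_f * x  # chain rule: df/dw = x
```

For example, with `x = 2.0` and `v = 3.0`, the plain gradient is `6.0`, while the reversed gradient (lambda = 1) is `-6.0`: the extractor update direction is exactly inverted, which is the adversarial signal.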