Title: How to Estimate Model Transferability of Pre-Trained Speech Models?
Authors: Chen, Zih Ching; Yang, Chao Han Huck; Li, Bo; Zhang, Yu; Chen, Nanxin; Chang, Shou Yiin; Prabhavalkar, Rohit; Lee, Hung-Yi; Sainath, Tara N.
Keywords: foundation speech models; model transferability; pre-trained speech models; transfer learning
Date issued: 1-Jan-2023
Volume: 2023-August
Source publication: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Abstract: In this work, we introduce a "score-based assessment" framework for estimating the transferability of pre-trained speech models (PSMs) to fine-tuning target tasks. We leverage two representation theories, Bayesian likelihood estimation and optimal transport, to generate rank scores for the PSM candidates from their extracted representations. By making a temporal-independence assumption, our framework computes transferability scores efficiently, without actually fine-tuning the candidate models or layers. We evaluate popular supervised speech models (e.g., Conformer RNN-Transducer) and self-supervised speech models (e.g., HuBERT) in cross-layer and cross-model settings using public data. Experimental results show a high Spearman's rank correlation and low p-value between our estimation framework and the fine-tuning ground truth. Our proposed transferability framework requires less computational time and fewer resources, making it a resource-saving and time-efficient approach for tuning speech foundation models.
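The validation step described in the abstract — checking that the estimated rank scores agree with fine-tuning ground truth via Spearman's rank correlation — can be sketched as follows. The scores, accuracies, and model names below are illustrative assumptions, not data from the paper:

```python
# Minimal sketch: a score-based transferability framework assigns each
# pre-trained speech model (PSM) candidate a rank score from its extracted
# representations; the ranking is then compared against fine-tuning ground
# truth using Spearman's rank correlation.

def spearman_rho(xs, ys):
    """Spearman's rank correlation for sequences without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(xs)
    rx, ry = [0] * n, [0] * n
    for r, i in enumerate(sorted(range(n), key=lambda i: xs[i])):
        rx[i] = r
    for r, i in enumerate(sorted(range(n), key=lambda i: ys[i])):
        ry[i] = r
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# Hypothetical transferability scores for four PSM candidates (higher = better)
scores = {"psm_a": 0.82, "psm_b": 0.35, "psm_c": 0.67, "psm_d": 0.51}
# Hypothetical accuracies after actually fine-tuning each candidate
finetuned_acc = {"psm_a": 0.91, "psm_b": 0.72, "psm_c": 0.88, "psm_d": 0.80}

names = sorted(scores)
rho = spearman_rho([scores[n] for n in names],
                   [finetuned_acc[n] for n in names])
print(round(rho, 3))  # identical orderings give perfect agreement -> 1.0
```

A high rho (close to 1) means the cheap, fine-tuning-free scores preserve the ranking that expensive fine-tuning would produce, which is the property the paper's experiments measure.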
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/636196
ISSN: 2308-457X
DOI: 10.21437/Interspeech.2023-1079
Appears in Collections: Department of Electrical Engineering
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated in the item's license terms.