https://scholars.lib.ntu.edu.tw/handle/123456789/559180
Title: Increasing Compactness of Deep Learning Based Speech Enhancement Models with Parameter Pruning and Quantization Techniques
Authors: Wu, J.-Y.; Yu, C.; Fu, S.-W.; Liu, C.-T.; Shao-Yi Chien; Tsao, Y.
Keywords: Compactness; Low Computational Cost; Parameter Pruning; Parameter Quantization
Publication Date: 2019
Volume: 26
Issue: 12
Pages: 1887-1891
Source Publication: IEEE Signal Processing Letters
Abstract: The most recent studies on deep learning based speech enhancement (SE) are focused on improving denoising performance. However, successful SE applications require striking a desirable balance between the denoising performance and computational cost in real scenarios. In this study, we propose a novel parameter pruning (PP) technique, which removes redundant channels in a neural network. In addition, parameter quantization (PQ) and feature-map quantization (FQ) techniques were also integrated to generate even more compact SE models. The experimental results show that the integration of PP, PQ, and FQ can produce a compacted SE model with a size of only 9.76% compared to that of the original model, resulting in minor performance losses of 0.01 (from 0.85 to 0.84) and 0.03 (from 2.55 to 2.52) for STOI and PESQ scores, respectively. These promising results confirm that the PP, PQ, and FQ techniques can be used to effectively reduce the storage of an SE system on edge devices. © 2019 IEEE.
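The two core ideas in the abstract, removing redundant channels and storing weights at reduced precision, can be sketched generically as follows. This is a minimal NumPy illustration, not the authors' implementation; the L1-norm pruning criterion, the `keep_ratio`, and the 8-bit uniform quantizer are all assumptions made for the example.

```python
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """Keep only the output channels with the largest L1 norms.

    weight: (out_channels, in_channels) matrix.
    keep_ratio is an illustrative hyperparameter, not from the paper.
    """
    norms = np.abs(weight).sum(axis=1)              # L1 norm per output channel
    n_keep = max(1, int(len(norms) * keep_ratio))
    keep = np.argsort(norms)[-n_keep:]              # indices of the strongest channels
    return weight[np.sort(keep)]

def quantize(weight, bits=8):
    """Uniform symmetric quantization of weights to `bits` bits."""
    scale = np.abs(weight).max() / (2 ** (bits - 1) - 1)
    q = np.round(weight / scale).astype(np.int8)    # stored compactly as int8
    return q, scale                                 # approx. recover with q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8))                    # toy layer: 16 output channels
pruned = prune_channels(w, keep_ratio=0.5)          # 8 channels survive
q, scale = quantize(pruned, bits=8)                 # 4x smaller than float32 storage
print(pruned.shape, q.dtype)
```

With half the channels pruned and float32 weights replaced by int8 codes plus one scale factor, the stored size of this toy layer drops to roughly an eighth of the original, which is the kind of compounding reduction the abstract's 9.76% figure reflects.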
URI: https://www.scopus.com/inward/record.url?eid=2-s2.0-85077775442&partnerID=40&md5=f58524429726ba230dc50f6a91c56444
https://scholars.lib.ntu.edu.tw/handle/123456789/559180
ISSN: 1070-9908
DOI: 10.1109/LSP.2019.2951950
SDG/Keywords: Speech enhancement; Compactness; Computational costs; De-noising; Feature map; Original model; Parameter Pruning; Parameter Quantization; Performance loss; Deep learning
Appears in Collections: Department of Electrical Engineering
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated in their respective license terms.