Title: Increasing Compactness of Deep Learning Based Speech Enhancement Models with Parameter Pruning and Quantization Techniques
Authors: Wu, J.-Y.; Yu, C.; Fu, S.-W.; Liu, C.-T.; Chien, S.-Y.; Tsao, Y.
Type: journal article
Date issued: 2019
Date added: 2021-05-05
ISSN: 1070-9908
DOI: 10.1109/LSP.2019.2951950
Scopus EID: 2-s2.0-85077775442
Scopus URL: https://www.scopus.com/inward/record.url?eid=2-s2.0-85077775442&partnerID=40&md5=f58524429726ba230dc50f6a91c56444
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/559180
Abstract: The most recent studies on deep learning based speech enhancement (SE) are focused on improving denoising performance. However, successful SE applications require striking a desirable balance between denoising performance and computational cost in real scenarios. In this study, we propose a novel parameter pruning (PP) technique, which removes redundant channels in a neural network. In addition, parameter quantization (PQ) and feature-map quantization (FQ) techniques were also integrated to generate even more compact SE models. The experimental results show that the integration of PP, PQ, and FQ can produce a compacted SE model with a size of only 9.76% of that of the original model, resulting in minor performance losses of 0.01 (from 0.85 to 0.84) and 0.03 (from 2.55 to 2.52) in STOI and PESQ scores, respectively. These promising results confirm that the PP, PQ, and FQ techniques can be used to effectively reduce the storage of an SE system on edge devices. © 2019 IEEE.
Author keywords: Compactness; Low Computational Cost; Parameter Pruning; Parameter Quantization
Indexed keywords: Speech enhancement; Compactness; Computational costs; De-noising; Feature map; Original model; Parameter Pruning; Parameter Quantization; Performance loss; Deep learning
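Note: The abstract only names the techniques; the sketch below is a minimal, self-contained illustration of the general ideas of channel pruning and weight quantization, not the authors' actual algorithm. The L1-magnitude pruning criterion, the 8-bit uniform quantizer, and the toy layer shapes are all assumptions chosen for illustration.

```python
# Illustrative sketch only: channel pruning by L1 magnitude plus uniform
# 8-bit weight quantization on a toy dense layer. The pruning criterion,
# bit width, and shapes are assumptions, NOT taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))  # toy layer: 64 output channels, 128 inputs

# --- Parameter pruning (PP): drop output channels with the smallest L1 norm ---
keep_ratio = 0.5
l1_per_channel = np.abs(W).sum(axis=1)                 # one score per output channel
n_keep = int(W.shape[0] * keep_ratio)
kept = np.sort(np.argsort(l1_per_channel)[-n_keep:])   # indices of channels to keep
W_pruned = W[kept]                                      # smaller dense weight matrix

# --- Parameter quantization (PQ): uniform 8-bit quantization of the kept weights ---
def quantize_uniform(w, n_bits=8):
    """Map float weights to n-bit integers plus a scale factor (symmetric)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

q_weights, scale = quantize_uniform(W_pruned)
W_dequant = q_weights.astype(np.float32) * scale        # weights as used at run time

print("original params  :", W.size)
print("after pruning    :", W_pruned.size)
print("max quant. error :", float(np.abs(W_pruned - W_dequant).max()))
```

Feature-map quantization (FQ) would apply the same quantize/dequantize step to intermediate activations at inference time; only the weights are quantized here for brevity.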