Increasing Compactness of Deep Learning Based Speech Enhancement Models with Parameter Pruning and Quantization Techniques
Journal
IEEE Signal Processing Letters
Journal Volume
26
Journal Issue
12
Pages
1887-1891
Date Issued
2019
Author(s)
Abstract
The most recent studies on deep learning based speech enhancement (SE) focus on improving denoising performance. However, successful SE applications require striking a desirable balance between denoising performance and computational cost in real scenarios. In this study, we propose a novel parameter pruning (PP) technique, which removes redundant channels in a neural network. In addition, parameter quantization (PQ) and feature-map quantization (FQ) techniques are integrated to generate even more compact SE models. The experimental results show that the integration of PP, PQ, and FQ can produce a compacted SE model whose size is only 9.76% that of the original model, with minor performance losses of 0.01 (from 0.85 to 0.84) in STOI and 0.03 (from 2.55 to 2.52) in PESQ. These promising results confirm that the PP, PQ, and FQ techniques can effectively reduce the storage footprint of an SE system on edge devices. © 2019 IEEE.
Subjects
Compactness; Low Computational Cost; Parameter Pruning; Parameter Quantization
Other Subjects
Speech enhancement; Compactness; Computational costs; De-noising; Feature map; Original model; Parameter Pruning; Parameter Quantization; Performance loss; Deep learning
Type
journal article
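
For a concrete sense of the techniques the abstract names, below is a minimal NumPy sketch of generic channel pruning (PP) and uniform quantization (applicable to parameters, PQ, or feature maps, FQ). The pruning criterion (per-channel L1 norm), the 8-bit width, and all function and variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def prune_channels(weights, keep_ratio=0.5):
    """Channel pruning sketch: rank the output channels of a weight
    tensor of shape (out_channels, ...) by L1 norm and keep only the
    strongest fraction. The paper's actual redundancy criterion may
    differ; this is an assumed, generic variant."""
    out_channels = weights.shape[0]
    l1 = np.abs(weights).reshape(out_channels, -1).sum(axis=1)
    n_keep = max(1, int(out_channels * keep_ratio))
    keep = np.sort(np.argsort(l1)[-n_keep:])  # indices of kept channels
    return weights[keep], keep

def quantize_uniform(x, n_bits=8):
    """Uniform quantization sketch: map values to 2**n_bits levels
    over [x.min(), x.max()], then dequantize back to floats."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** n_bits - 1) if hi > lo else 1.0
    q = np.round((x - lo) / scale)          # integer code per value
    return q * scale + lo                    # dequantized approximation

# Toy usage: a 16-channel layer pruned to 8 channels, then 8-bit quantized.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 3, 3, 3)).astype(np.float32)
w_pruned, kept = prune_channels(w, keep_ratio=0.5)
w_small = quantize_uniform(w_pruned, n_bits=8)
print("channels kept:", kept, "max quantization error:",
      np.abs(w_pruned - w_small).max())
```

In this toy setting, halving the channels and storing 8-bit codes instead of 32-bit floats would shrink the layer to roughly one eighth of its original size, which is the same order of compression (about 10%) that the abstract reports for the combined PP, PQ, and FQ pipeline.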
