https://scholars.lib.ntu.edu.tw/handle/123456789/607473
Title: | SEOFP-NET: Compression and Acceleration of Deep Neural Networks for Speech Enhancement Using Sign-Exponent-Only Floating-Points | Authors: | Lin Y; Yu C; Hsu Y; Fu S; Tsao Y; Tei-Wei Kuo |
Keywords: | Computational modelling; Deep neural network model compression; Floating-point integer arithmetic circuit; Inference acceleration; Inference algorithms; Noise measurement; Noise reduction; Signal processing algorithms; Speech dereverberation; Speech enhancement; Task analysis; Audio signal processing; Digital arithmetic; Inference engines; Speech communication; Timing circuits; Arithmetic circuit; Floating points; Integer arithmetic; Model compression; Neural network model; Deep neural networks | Issue Date: | 2021 | Journal Volume: | 30 | Start page/Pages: | 1016-1031 | Source: | IEEE/ACM Transactions on Audio Speech and Language Processing | Abstract: | Numerous compression and acceleration strategies have achieved outstanding results on classification tasks in various fields, such as computer vision and speech signal processing. Nevertheless, the same strategies have yielded unsatisfactory performance on regression tasks because these tasks differ in nature from classification tasks. In this paper, a novel sign-exponent-only floating-point network (SEOFP-NET) technique is proposed to compress the model size and accelerate the inference time for speech enhancement, a regression task of speech signal processing. The proposed method compresses the sizes of deep neural network (DNN)-based speech enhancement models by quantizing the fraction bits of single-precision floating-point parameters during training. Before inference implementation, all parameters in the trained SEOFP-NET model are slightly adjusted to accelerate the inference time by replacing the floating-point multiplier with an integer adder.
For generalization, the SEOFP-NET technique is applied to different speech enhancement tasks in speech signal processing with different model architectures under various corpora. The experimental results indicate that the size of SEOFP-NET models can be significantly compressed by up to 81.249% without noticeably degrading their speech enhancement performance, and the inference time can be accelerated to 1.212× compared with the baseline models. The results also verify that the proposed SEOFP-NET can cooperate with other efficiency strategies to achieve a synergy effect for model compression. In addition, the just noticeable difference (JND) was applied in the user-study experiment to statistically analyze the effect of speech enhancement on listening. The results indicate that listeners cannot easily differentiate between the enhanced speech signals processed by the baseline model and by the proposed SEOFP-NET. To the best of the authors' knowledge, this study is one of the first research works to substantially compress the size of DNN-based speech enhancement algorithms and simultaneously reduce their inference time while maintaining satisfactory enhancement performance. The promising results suggest that DNN-based speech enhancement algorithms with the proposed SEOFP-NET technique are well suited to lightweight embedded devices.
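The core idea in the abstract — zeroing the 23 fraction bits of an IEEE-754 single-precision value so that multiplication reduces to integer addition on the exponent fields — can be illustrated with a short sketch. This is not the authors' implementation; the function names (`quantize_seofp`, `mul_as_int_add`) and the truncation-based rounding are illustrative assumptions, and the sketch ignores edge cases such as zero, infinities, and exponent overflow:

```python
import struct

BIAS = 127 << 23          # IEEE-754 single-precision exponent bias, in field position
SIGN = 0x8000_0000        # sign-bit mask
MAG = 0x7FFF_FFFF         # sign-cleared magnitude mask

def f2b(x: float) -> int:
    """Reinterpret a float32 as its 32-bit pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def b2f(b: int) -> float:
    """Reinterpret a 32-bit pattern as a float32."""
    return struct.unpack("<f", struct.pack("<I", b))[0]

def quantize_seofp(x: float) -> float:
    """Keep only sign and exponent: zero the 23 fraction bits.
    The result is always a signed power of two."""
    return b2f(f2b(x) & 0xFF80_0000)

def mul_as_int_add(a: float, b: float) -> float:
    """Multiply two sign-exponent-only values (each is +/- 2^e)
    using one integer addition instead of a float multiply:
    adding the biased exponent fields and subtracting one bias
    yields the product's exponent; the sign is an XOR."""
    ba, bb = f2b(a), f2b(b)
    mag = (ba & MAG) + (bb & MAG) - BIAS   # add exponent fields
    return b2f(((ba ^ bb) & SIGN) | mag)

w = quantize_seofp(0.3)            # truncates 0.3 down to 0.25 (= 2^-2)
print(w)                           # 0.25
print(mul_as_int_add(w, 8.0))      # 2.0, computed without a float multiply
```

Because every quantized weight is a signed power of two, the float multiplier in each neuron's dot product can be replaced by this integer-add path, which is the source of the inference speed-up the abstract reports.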
URI: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85121785550&doi=10.1109%2fTASLP.2021.3133209&partnerID=40&md5=8919a8d473529d35fc30a82bc293bce0 https://scholars.lib.ntu.edu.tw/handle/123456789/607473 |
ISSN: | 2329-9290 | DOI: | 10.1109/TASLP.2021.3133209 | SDG/Keyword: | Audio signal processing; Digital arithmetic; Inference engines; Speech communication; Speech enhancement; Timing circuits; Arithmetic circuit; Computational modelling; Deep neural network model compression; Floating points; Floating-point integer arithmetic circuit; Inference acceleration; Inference algorithm; Integer arithmetic; Model compression; Neural network model; Noise measurements; Signal processing algorithms; Speech dereverberation; Task analysis; Deep neural networks |
Appears in Collections: | 資訊工程學系 (Department of Computer Science and Information Engineering)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.