Title: Simple Deep Learning Network via Tensor-Train Haar Wavelet Decomposition Without Retraining
Authors: Wei-Chih Huang; Sung-Hsien Hsieh; Chun-Shien Lu; Soo-Chang Pei
Keywords: Artificial intelligence; Deep neural networks; Signal processing; Tensors; Accuracy loss; Fully-connected layers; Haar wavelet decomposition; Haar wavelets; Learning network; Memory cost; Model compression; Tensor trains; Wavelet decomposition
Publication Date: 2018
Volume: 2018-September
Pages: 1522-1527
Source Publication: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
Abstract: Deep neural networks have revolutionized machine learning in recent years. However, they incur high computation and memory costs, so deploying them on hardware with limited resources (e.g., mobile devices) is challenging. To address this problem, we propose a new technique, called Tensor-Train Haar-wavelet decomposition, which decomposes a large weight tensor from a fully-connected layer into a sequence of partial Haar-wavelet matrices without retraining. The novelty originates from the deterministic partial Haar-wavelet matrices: only the row indices need to be stored rather than the whole matrix. Empirical results demonstrate that our method achieves efficient model compression with limited accuracy loss, even without retraining. © 2018 IEEE.
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/428093
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85056989693&doi=10.1109%2fMLSP.2018.8516987&partnerID=40&md5=748ae1c5b07770bdf72e8180f5583766
ISSN: 2161-0363
DOI: 10.1109/MLSP.2018.8516987
Appears in Collections: Department of Electrical Engineering