Simple Deep Learning Network via Tensor-Train Haar Wavelet Decomposition Without Retraining

Authors: Wei-Chih Huang; Sung-Hsien Hsieh; Chun-Shien Lu; Soo-Chang Pei
Type: conference paper
Year: 2018
Date available: 2019-10-24
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/428093
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85056989693&doi=10.1109%2fMLSP.2018.8516987&partnerID=40&md5=748ae1c5b07770bdf72e8180f5583766
DOI: 10.1109/mlsp.2018.8516987
Scopus ID: 2-s2.0-85056989693

Abstract: Deep neural networks have revolutionized machine learning in recent years. However, they incur high computation and memory costs, making deployment on hardware with limited resources (e.g., mobile devices) a challenge. To address this problem, we propose a new technique, called Tensor-Train Haar-wavelet decomposition, which decomposes a large weight tensor from a fully-connected layer into a sequence of partial Haar-wavelet matrices without retraining. The novelty originates from the fact that the partial Haar-wavelet matrices are deterministic, so only their row indices, rather than the whole matrices, need to be stored. Empirical results demonstrate that our method achieves efficient model compression with limited accuracy loss, even without retraining. © 2018 IEEE.

Keywords: Artificial intelligence; Deep neural networks; Signal processing; Tensors; Accuracy loss; Fully-connected layers; Haar wavelet decomposition; Haar wavelets; Learning network; Memory cost; Model compression; Tensor trains; Wavelet decomposition
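Illustrative sketch (not from the paper): the storage idea behind deterministic partial Haar-wavelet matrices is that a Haar matrix can be regenerated from its size alone, so a layer compressed with a subset of Haar rows only needs the row indices and the projected coefficients, not the rows themselves. The minimal NumPy sketch below illustrates this on a single fully-connected weight matrix; the paper's actual method chains such partial Haar matrices inside a Tensor-Train factorization, and all names here (haar_matrix, compress_fc_weight, reconstruct) are hypothetical.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix of size n x n (n must be a power of 2)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    # Kronecker construction: averaging rows on top, differencing rows below.
    top = np.kron(h, [1.0, 1.0])
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bottom]) / np.sqrt(2.0)

def compress_fc_weight(W, keep):
    """Keep the `keep` Haar rows with the largest coefficient energy.

    Returns the stored row indices and coefficients; the Haar rows
    themselves are deterministic and need not be stored.
    """
    n = W.shape[0]
    H = haar_matrix(n)                  # regenerated, never stored
    coeffs = H @ W                      # Haar coefficients of W's columns
    energy = np.sum(coeffs ** 2, axis=1)
    idx = np.argsort(energy)[-keep:]    # indices of the dominant rows
    return idx, coeffs[idx]

def reconstruct(idx, coeffs, n):
    """Rebuild an approximation of W from stored indices + coefficients."""
    H = haar_matrix(n)
    # Rows of H are orthonormal, so the selected rows' transpose inverts
    # the projection onto them.
    return H[idx].T @ coeffs

# Toy usage: compress a 256 x 128 fully-connected weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))
idx, coeffs = compress_fc_weight(W, keep=64)
W_hat = reconstruct(idx, coeffs, n=256)
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

In this simplified setting the stored footprint drops from n x m weights to keep integer indices plus keep x m coefficients, with no retraining step; the Tensor-Train structure in the paper applies the same idea core by core to much larger tensors.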