Authors: Chen, Y.-L.; Liu, Pangfeng; Wu, J.-J.
Date accessioned: 2021-09-02
Date available: 2021-09-02
Date issued: 2020
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85103854622&doi=10.1109%2fBigData50022.2020.9377924&partnerID=40&md5=6d98204d2c4e91224cfeaeb42c68cff3
Repository URL: https://scholars.lib.ntu.edu.tw/handle/123456789/581421
Abstract: In this paper, we propose an adaptive layer expansion algorithm that reduces the training time of deep neural networks without noticeable loss of accuracy. Neural networks have grown much larger to improve accuracy, and their size makes them time-consuming to train. Our algorithm therefore reduces training time by dynamically adding nodes where they are needed, improving training efficiency without sacrificing accuracy. We start with a smaller model containing only a fraction of the parameters of the original model, train the network, and add nodes to specific layers selected by the stability of their gradients. The algorithm repeatedly adds nodes until it reaches a threshold, then trains the model until the accuracy converges. The experimental results indicate that our algorithm uses only a quarter of the computation time of a full model and achieves 64.1% accuracy with MobileNet on the CIFAR-100 dataset, only 2% less than the complete model. The algorithm stops adding nodes when the model has only half of the parameters of the original model. As a result, the new model also provides fast inference in environments where both computation power and memory storage are limited, such as mobile devices. © 2020 IEEE.
Author keywords: CNN; Deep Learning; Gradient Stability; Machine Learning; Model Acceleration
SDGs: SDG3; SDG7
Indexed keywords: Big data; Deep neural networks; Digital storage; Expansion; Computation power; Computation time; Fast inference; Layer expansion; Loss of accuracy; Memory storage; Original model; Training efficiency; Multilayer neural networks
Title: An Adaptive Layer Expansion Algorithm for Efficient Training of Deep Neural Networks
Type: conference paper
DOI: 10.1109/BigData50022.2020.9377924
Scopus EID: 2-s2.0-85103854622
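Note: The abstract describes the growth loop only at a high level. The sketch below is a minimal toy reading of it, not the authors' implementation: it assumes a two-layer MLP, uses cosine similarity of successive gradients as the "gradient stability" signal, widens a layer in a function-preserving way, and stops at an arbitrary parameter budget. Every name, threshold, and hyper-parameter here is an illustrative assumption.

```python
# Toy sketch of an adaptive layer expansion loop (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

def widen(l1: nn.Linear, l2: nn.Linear, extra: int):
    """Add `extra` hidden units between two Linear layers without changing
    the computed function: old weights are copied, new units get zero fan-out."""
    n1 = nn.Linear(l1.in_features, l1.out_features + extra)
    n2 = nn.Linear(l2.in_features + extra, l2.out_features)
    with torch.no_grad():
        n1.weight[: l1.out_features] = l1.weight
        n1.bias[: l1.out_features] = l1.bias
        n2.weight[:, : l2.in_features] = l2.weight
        n2.weight[:, l2.in_features:] = 0.0   # new units start inert
        n2.bias.copy_(l2.bias)
    return n1, n2

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, :3].sum(dim=1) > 0).long()          # synthetic binary task

l1, l2 = nn.Linear(20, 4), nn.Linear(4, 2)    # slim starting model
budget = 2_000                                 # stop growing past this size
opt = torch.optim.SGD([*l1.parameters(), *l2.parameters()], lr=0.1)
prev_grad = None

for step in range(400):
    opt.zero_grad()
    loss = F.cross_entropy(l2(F.relu(l1(X))), y)
    loss.backward()
    g = l1.weight.grad.flatten().clone()
    if prev_grad is not None and g.shape == prev_grad.shape and step % 20 == 0:
        # Stability proxy: how aligned are successive gradient directions?
        stability = F.cosine_similarity(g, prev_grad, dim=0).item()
        n_params = sum(p.numel() for p in [*l1.parameters(), *l2.parameters()])
        if stability < 0.5 and n_params < budget:   # unstable layer: grow it
            l1, l2 = widen(l1, l2, extra=4)
            opt = torch.optim.SGD([*l1.parameters(), *l2.parameters()], lr=0.1)
            prev_grad = None
            continue                                 # resume training, no stale step
    prev_grad = g
    opt.step()

acc = (l2(F.relu(l1(X))).argmax(dim=1) == y).float().mean().item()
print(f"final hidden width: {l1.out_features}, accuracy: {acc:.2f}")
```

In the paper itself the selection rule is applied per layer of a deep network (e.g., MobileNet) and growth stops at half the original parameter count; the sketch compresses that to a single hidden layer for brevity.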