https://scholars.lib.ntu.edu.tw/handle/123456789/581421
Title: An Adaptive Layer Expansion Algorithm for Efficient Training of Deep Neural Networks
Authors: Chen Y.-L.; Pangfeng Liu; Wu J.-J.
Keywords: CNN; Deep Learning; Gradient Stability; Machine Learning; Model Acceleration
Date: 2020
Pages: 420-425
Source: Proceedings - 2020 IEEE International Conference on Big Data, Big Data 2020
Abstract: In this paper, we propose an adaptive layer expansion algorithm that reduces the training time of deep neural networks without noticeable loss of accuracy. Neural networks have grown much larger to improve accuracy, and their size makes them time-consuming to train. Our algorithm reduces training time by dynamically adding nodes where they are needed, improving training efficiency without losing accuracy. We start with a small model containing only a fraction of the parameters of the original model, train the network, and add nodes to specific layers selected by the stability of their gradients. The algorithm repeatedly adds nodes until it reaches a threshold, then trains the model until the accuracy converges. Experimental results indicate that our algorithm uses only a quarter of the computation time of the full model and achieves 64.1% accuracy with MobileNet on CIFAR-100, only 2% less than the complete model. The algorithm stops adding nodes when the model has only half of the parameters of the original model. As a result, the new model provides fast inference in environments where both computation power and memory storage are limited, such as mobile devices. © 2020 IEEE.
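The expansion loop described in the abstract (grow layers whose gradients are unstable, stop at half the full model's parameter count) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's implementation: the stability measure here is the coefficient of variation of recent per-layer gradient norms, layer widths stand in for parameter counts, and the names (`gradient_stability`, `expand_layers`, `grow`, `stability_cutoff`) are hypothetical.

```python
def gradient_stability(grad_norms):
    """Coefficient of variation of recent gradient norms for one layer.
    A lower value means the gradients have stabilized (assumed metric)."""
    mean = sum(grad_norms) / len(grad_norms)
    if mean == 0:
        return 0.0
    var = sum((g - mean) ** 2 for g in grad_norms) / len(grad_norms)
    return (var ** 0.5) / mean


def expand_layers(widths, grad_history, full_width, grow=4, stability_cutoff=0.5):
    """Add `grow` nodes to each layer whose gradients are still unstable.

    widths       -- current number of nodes per layer
    grad_history -- recent gradient norms per layer (one list per layer)
    full_width   -- total node count of the full model; per the abstract,
                    expansion stops once the small model reaches half of it
                    (here total width is a simplified proxy for parameters)
    """
    new_widths = list(widths)
    for i, norms in enumerate(grad_history):
        if sum(new_widths) >= full_width // 2:  # stopping point: half the full model
            break
        if gradient_stability(norms) > stability_cutoff:  # unstable -> add capacity
            new_widths[i] += grow
    return new_widths


# One expansion step: only the middle layer, whose gradient norms still
# fluctuate strongly, receives new nodes.
widths = expand_layers(
    widths=[8, 8, 8],
    grad_history=[[1.0, 1.0, 1.0], [0.1, 2.0, 0.5], [1.0, 1.0, 1.0]],
    full_width=100,
)
print(widths)  # [8, 12, 8]
```

In a full training loop this step would alternate with ordinary training epochs, with gradient norms recorded per layer between expansions.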
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85103854622&doi=10.1109%2fBigData50022.2020.9377924&partnerID=40&md5=6d98204d2c4e91224cfeaeb42c68cff3 ; https://scholars.lib.ntu.edu.tw/handle/123456789/581421
DOI: 10.1109/BigData50022.2020.9377924
SDG/Keywords: Big data; Deep neural networks; Digital storage; Expansion; Computation power; Computation time; Fast inference; Layer expansion; Loss of accuracy; Memory storage; Original model; Training efficiency; Multilayer neural networks
Appears in Collections: Department of Computer Science and Information Engineering
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated by their specific copyright terms.