https://scholars.lib.ntu.edu.tw/handle/123456789/607334
Title: A 1.625 TOPS/W SOC for Deep CNN Training and Inference in 28nm CMOS
Authors: Liu Y.-T.; Kung C.; Hsieh M.-H.; Wang H.-W.; Lin C.-P.; Yu C.-Y.; Chen C.-S.; Tzi-Dar Chiueh
Keywords: AI accelerator; low-precision neural network; machine learning; SOC; CMOS integrated circuits; energy efficiency; neural networks; programmable logic controllers; system-on-chip; 28nm CMOS; co-designing; convolutional neural network; data representations; lower precision; network inference; neural network training; convolution
Date of publication: 2021
Pages: 107-110
Source publication: ESSCIRC 2021 - IEEE 47th European Solid State Circuits Conference, Proceedings
Abstract: This work presents a FloatSD8-based system on chip (SOC) for both the inference and the training of convolutional neural networks (CNNs). A novel number format (FloatSD8) is employed to reduce the computational complexity of the convolution circuit. By co-designing the data representation and the circuit, we demonstrate that the AI SOC can achieve high convolution performance and optimal energy efficiency without sacrificing the quality of training. At its normal operating condition (200 MHz), the AI SOC prototype is capable of 0.69 TFLOPS peak performance and 1.625 TOPS/W in 28nm CMOS. © 2021 IEEE.
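The abstract's key idea is that a signed-digit number format such as FloatSD8 lowers the cost of convolution arithmetic. The paper's exact FloatSD8 encoding is not reproduced here; the following is only a minimal generic sketch of signed-digit arithmetic, where a weight is approximated by a short sum of signed powers of two so that each multiply in the convolution reduces to a few hardware shifts and adds. The encoder `to_signed_digits` and its parameters are hypothetical illustrations, not the paper's circuit.

```python
# Generic signed-digit illustration (NOT the exact FloatSD8 spec from the paper).
# A weight w is approximated as a sum of at most `max_digits` terms of the form
# +/- 2^e, so x * w needs only shifts and adds instead of a full multiplier.

def to_signed_digits(w, max_digits=2, exp_range=range(7, -8, -1)):
    """Greedily approximate w as a list of (sign, exponent) pairs (hypothetical encoder)."""
    digits = []
    residual = w
    for _ in range(max_digits):
        if residual == 0:
            break
        # pick the power of two closest in magnitude to the residual
        e = min(exp_range, key=lambda k: abs(abs(residual) - 2 ** k))
        s = 1 if residual > 0 else -1
        digits.append((s, e))
        residual -= s * 2 ** e
    return digits

def sd_multiply(x, digits):
    """Multiply activation x by a signed-digit weight using only shifts and adds."""
    acc = 0
    for s, e in digits:
        term = x * (2 ** e)  # in hardware: a barrel shift of x by |e| positions
        acc += term if s > 0 else -term
    return acc

# Example: 0.75 = 2^0 - 2^-2, so multiplying by 0.75 costs two shift-add terms.
d = to_signed_digits(0.75)   # → [(1, 0), (-1, -2)]
y = sd_multiply(8, d)        # → 6.0, matching 8 * 0.75
```

Bounding the number of nonzero signed digits per weight is what trades a small amount of precision for a much cheaper convolution datapath, which is the co-design the abstract refers to.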
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85118437077&doi=10.1109%2fESSCIRC53450.2021.9567756&partnerID=40&md5=3a05b7ca5dcfde6c42f6f796192626f0
https://scholars.lib.ntu.edu.tw/handle/123456789/607334
DOI: 10.1109/ESSCIRC53450.2021.9567756
Appears in Collections: Department of Electrical Engineering
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated in their own copyright terms.