https://scholars.lib.ntu.edu.tw/handle/123456789/607336
Title: A 1.625 TOPS/W SOC for Deep CNN Training and Inference in 28nm CMOS
Authors: Liu Y.-T.; Kung C.; Hsieh M.-H.; Wang H.-W.; Lin C.-P.; Yu C.-Y.; Chen C.-S.; Tzi-Dar Chiueh
Keywords: AI accelerator; low-precision neural network; machine learning; SOC; CMOS integrated circuits; convolution; deep learning; energy efficiency; programmable logic controllers; 28nm CMOS; co-designing; convolutional neural network; data representations; low-precision neural network; lower precision; network inference; neural network training; neural networks; system-on-chip
Publication date: 2021
Volume: 2021-September
Pages: 107-110
Source publication: European Solid-State Device Research Conference
Abstract: This work presents a FloatSD8-based system on chip (SOC) for both the inference and the training of convolutional neural networks (CNNs). A novel number format (FloatSD8) is employed to reduce the computational complexity of the convolution circuit. By co-designing the data representation and the circuit, we demonstrate that the AI SOC can achieve high convolution performance and excellent energy efficiency without sacrificing training quality. At its normal operating condition (200 MHz), the AISOC prototype is capable of 0.69 TFLOPS peak performance and 1.625 TOPS/W in 28nm CMOS. © 2021 IEEE.
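The abstract notes that FloatSD8 reduces the computational complexity of convolution through its number format. The sketch below is a hypothetical illustration of the general signed-digit idea behind such formats: when a weight is encoded as a small number of signed power-of-two digits, each multiplication collapses into a few shifts and adds. The encoding shown (two signed digits, `d1*2^e1 + d2*2^e2` with digits in {-1, 0, +1}) is an assumption for illustration only; the paper's actual FloatSD8 bit layout may differ.

```python
def signed_digit_multiply(x, digits):
    """Multiply integer x by a weight expressed as signed power-of-two digits.

    `digits` is a list of (sign, exponent) pairs, sign in {-1, +1}.
    Each digit contributes a shift instead of a full multiplication,
    which is the complexity saving signed-digit formats aim for.
    """
    acc = 0
    for sign, exp in digits:
        term = x << exp          # power-of-two scaling via shift
        acc += term if sign > 0 else -term
    return acc

# Example: the weight 6 decomposes as +2^3 - 2^1, so multiplying by 6
# costs two shifts and one add/subtract.
result = signed_digit_multiply(5, [(+1, 3), (-1, 1)])  # 5 * 6
```

In hardware, restricting weights to a few such digits lets the convolution datapath replace multipliers with shift-and-accumulate units, which is consistent with the co-design of data representation and circuit described in the abstract.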
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85123395015&doi=10.1109%2fESSDERC53440.2021.9631821&partnerID=40&md5=ccc1714e66fdc9c5cffb86ab84206a03
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/607336
ISSN: 1930-8876
DOI: 10.1109/ESSDERC53440.2021.9631821
Appears in Collections: Department of Electrical Engineering
Items in this institutional repository are protected by copyright, with all rights reserved, unless their copyright terms indicate otherwise.