A Multiplier-Less Convolutional Neural Network Inference Accelerator for Intelligent Edge Devices
Journal
IEEE Journal on Emerging and Selected Topics in Circuits and Systems
Journal Volume
11
Journal Issue
4
Pages
739-750
Date Issued
2021
Author(s)
Abstract
As the demand for neural network operations on edge devices increases, energy-efficient neural network inference solutions become necessary. To this end, this paper proposes a compact 4-bit number format (SD4) for neural network weights. Besides significantly reducing the amount of neural network data transmission, SD4 also reduces the neural network convolution operation from multiply-and-accumulate (MAC) to addition only. MNIST and CIFAR-10 CNNs with SD4 weights achieve accuracy comparable to their FP32-trained counterparts, and the top-1 accuracy of the 4-bit ResNet CNN on ImageNet differs from the FP32 baseline by less than 0.5%. In the hardware design, we implement a multiplier-less convolution acceleration circuit; compared with the 8-bit weight circuit, the power consumption and area of the 4-bit 3×3 convolution circuit are reduced by nearly 50%. This work also proposes a systematic CNN deployment solution consisting of software CNN training and hardware acceleration. The proposed FPGA-based accelerator for VGG7 image classification achieves a peak throughput of 345.6 GOPS at a 100-MHz clock rate, with a power consumption of 1.19 W and an energy efficiency of 289.5 GOPS/W. Compared to a CPU implementation of VGG7-128 inference, the multiplier-less acceleration circuit is 4.8 times faster and 384 times more energy efficient.
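To illustrate the idea behind a multiplier-less dot product, the sketch below assumes (the record does not specify the SD4 encoding) that each 4-bit weight code stores a sign bit and a small shift amount, i.e. w = ±2^shift, so each product reduces to a bit shift and the accumulation needs only adders. The `decode_sd4` helper and its bit layout are hypothetical, used purely for illustration.

```python
# Minimal sketch of a multiplier-less 3x3 convolution window, assuming a
# hypothetical SD4 layout: code 0 -> weight 0, bit 3 = sign, bits 2..0 = shift.
import numpy as np

def decode_sd4(code: int) -> tuple[int, int]:
    """Hypothetical SD4 decode: returns (sign, shift) with weight = sign * 2**shift."""
    if code == 0:
        return 0, 0
    sign = -1 if (code >> 3) & 1 else 1
    shift = code & 0b111
    return sign, shift

def multiplierless_dot(x: np.ndarray, w_codes: np.ndarray) -> int:
    """Dot product of activations with SD4-coded weights using shifts and adds only."""
    acc = 0
    for xi, code in zip(x.astype(np.int64), w_codes):
        sign, shift = decode_sd4(int(code))
        acc += sign * (int(xi) << shift)  # a shift replaces the multiplier
    return acc

# Example: one flattened 3x3 activation window and nine SD4 weight codes.
x = np.arange(1, 10, dtype=np.int64)
w_codes = np.array([0b0000, 0b1001, 0b0010, 0b0000, 0b1011,
                    0b0001, 0b0010, 0b1000, 0b0001])
print(multiplierless_dot(x, w_codes))
```

In hardware, the shift amounts can be fixed by the decoded weight, so the datapath degenerates to wiring plus an adder tree, which is consistent with the paper's reported area and power savings over an 8-bit MAC-based circuit.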
Subjects
Convolutional neural networks (CNNs)
FPGA
inference acceleration
multiplier-less
Acceleration
Codes (symbols)
Computer graphics
Convolution
Electric power utilization
Energy efficiency
Network coding
Neural networks
Neurons
Program processors
Quantization (signal)
Encodings
Graphics processing units
Field programmable gate arrays (FPGA)
Type
journal article
