Batch normalization processor design for convolution neural network training and inference
Journal
Proceedings - IEEE International Symposium on Circuits and Systems
Journal Volume
2021-May
Date Issued
2021
Author(s)
Abstract
In the training process of convolutional neural networks (CNNs), a batch normalization (BN) layer is often inserted after a convolution layer to accelerate the convergence of training. In this work, we propose a BN processor that supports both the training and inference processes. To speed up CNN training, the proposed work develops an efficient dataflow that integrates a novel BN processor design with the processing elements for convolution acceleration. We exploit the similarities between the calculations required for the BN forward and backward passes by sharing hardware elements across the two passes, thereby reducing the area overhead. In addition to functional verification of the BN processor, we also completed Automatic Placement & Routing (APR) and conducted post-APR simulation of neural network training. The proposed solution not only significantly speeds up the CNN training process but also achieves hardware savings. © 2021 IEEE
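For context, the forward/backward similarity the abstract refers to can be seen in the standard BN equations (Ioffe and Szegedy, 2015); the sketch below is not taken from the paper itself, but it shows the shared terms that make hardware reuse possible. For a mini-batch of size $m$, the forward pass computes

\[
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x_i - \mu_B\right)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma\,\hat{x}_i + \beta,
\]

and the backward pass computes

\[
\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{m}\frac{\partial L}{\partial y_i}\,\hat{x}_i, \qquad
\frac{\partial L}{\partial \beta} = \sum_{i=1}^{m}\frac{\partial L}{\partial y_i}, \qquad
\frac{\partial L}{\partial x_i} = \frac{\gamma}{m\sqrt{\sigma_B^2 + \epsilon}}\left(m\,\frac{\partial L}{\partial y_i} - \sum_{j=1}^{m}\frac{\partial L}{\partial y_j} - \hat{x}_i\sum_{j=1}^{m}\frac{\partial L}{\partial y_j}\,\hat{x}_j\right).
\]

Both passes require the inverse standard deviation $(\sigma_B^2 + \epsilon)^{-1/2}$, batch-wide accumulations, and multiplications involving the normalized activations $\hat{x}_i$, which is presumably the kind of overlapping computation the proposed processor shares between passes to save area.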
Subjects
Accelerator
Batch normalization
Convolutional neural network
Hardware
Training
Convolution
Convolutional neural networks
Integrated circuit design
Automatic placement
Convolution neural network
Forward-and-backward
Functional verification
Hardware elements
Inference process
Neural network training
Processing elements
Multilayer neural networks
SDGs
Type
conference paper
