Design Optimization for ADMM-Based SVM Training Processor for Edge Computing
Journal
2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems, AICAS 2021
Date Issued
2021
Author(s)
Abstract
This paper presents an optimized support vector machine (SVM) training processor employing the alternating direction method of multipliers (ADMM) optimizer. Low-rank approximation is exploited to reduce the dimension of the kernel matrix by employing the Nyström method. Verified on four datasets, the proposed ADMM-based training processor with rank approximation reduces the kernel matrix dimension by 32× with only a 2% drop in inference accuracy. Compared to the conventional sequential minimal optimization (SMO) algorithm, the ADMM-based training algorithm achieves a 9.8 × 10^7× shorter latency for training 2048 samples. Hardware optimization techniques, including pre-computation and memory sharing, are proposed to reduce the computational complexity by 62% and the memory usage by 60%. As a proof of concept, an epileptic seizure detector is designed to demonstrate the effectiveness of the proposed optimization techniques. The chip achieves a 153,310× higher energy efficiency and a 364× higher throughput-to-area ratio for SVM training than a high-end CPU. This work provides a promising solution for edge devices that require low-power and real-time training. © 2021 IEEE.
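For readers who want the gist of the two algorithmic ideas named in the abstract, the sketch below is a minimal NumPy illustration, not the authors' fixed-point hardware design: a Nyström low-rank approximation that shrinks the kernel matrix dimension, followed by a standard ADMM loop that trains a linear SVM on the reduced features. All hyperparameters (landmark count m, RBF width gamma, penalty C, step rho), the function names, and the toy data are illustrative assumptions; the one-time Cholesky factorization loosely mirrors the paper's pre-computation idea.

```python
# Sketch only: Nystrom low-rank kernel approximation + ADMM linear-SVM training.
# Not the paper's RTL/fixed-point implementation; hyperparameters are assumptions.
import numpy as np

def nystrom_features(X, m=32, gamma=1.0, seed=0):
    """Approximate the n x n RBF kernel matrix K with rank m:
    K ~= Phi @ Phi.T, where Phi is n x m (the dimension reduction)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)  # landmark samples
    L = X[idx]
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K_mm = rbf(L, L)
    K_nm = rbf(X, L)
    evals, V = np.linalg.eigh(K_mm + 1e-8 * np.eye(m))  # build K_mm^{-1/2}
    K_mm_inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-10))) @ V.T
    return K_nm @ K_mm_inv_sqrt

def admm_svm(Phi, y, C=1.0, rho=1.0, iters=100):
    """ADMM for min 0.5*||w||^2 + C*sum(hinge(1 - y_i * phi_i . w)),
    split as A w = z with A = diag(y) @ Phi."""
    A = y[:, None] * Phi
    n, d = A.shape
    w, z, u = np.zeros(d), np.zeros(n), np.zeros(n)
    # Pre-computation (echoes the paper's idea): factor (I + rho*A^T A) once,
    # so each w-update is only two triangular solves.
    M = np.linalg.cholesky(np.eye(d) + rho * A.T @ A)
    for _ in range(iters):
        rhs = rho * A.T @ (z - u)
        w = np.linalg.solve(M.T, np.linalg.solve(M, rhs))  # w-update
        v = A @ w + u
        # z-update: proximal operator of the hinge loss, elementwise.
        z = np.where(v >= 1.0, v,
                     np.where(v <= 1.0 - C / rho, v + C / rho, 1.0))
        u = u + A @ w - z  # scaled dual update
    return w

# Toy usage: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(1, 1, (100, 4))])
y = np.hstack([-np.ones(100), np.ones(100)])
Phi = nystrom_features(X, m=32)
w = admm_svm(Phi, y)
print(f"training accuracy: {np.mean(np.sign(Phi @ w) == y):.2f}")
```

In this toy run the 32-dimensional Nyström features stand in for the full 200 × 200 kernel matrix, which is the same dimension-reduction mechanism the abstract credits for the 32× reduction with a small accuracy cost.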
Subjects
alternating direction method of multipliers (ADMM)
hardware-efficient realization
on-line training
rank approximation
Support vector machine (SVM)
Approximation theory
Edge computing
Energy efficiency
Integrated circuit design
Matrix algebra
Optimization
Design optimization
Hardware optimization
Low rank approximations
Method of multipliers
Optimization techniques
Real-time training
Sequential minimal optimization algorithms
Training algorithms
Support vector machines
SDGs
Type
conference paper
