Optimizing 0/1 loss for perceptrons by random coordinate descent
Journal
IEEE International Conference on Neural Networks
Pages
749-754
Date Issued
2007
Author(s)
Li, L.
Abstract
The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms that directly minimize the 0/1 loss for perceptrons, and we prove their convergence. Our algorithms are computationally efficient and usually achieve the lowest 0/1 loss among the algorithms compared. These advantages make them well suited to nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and can achieve the lowest test error on many complex data sets when coupled with AdaBoost. ©2007 IEEE.
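The abstract only names the technique, so the following is a minimal illustrative sketch of the general idea of random coordinate descent on the 0/1 loss, not the authors' exact algorithm: along a randomly chosen coordinate direction, the 0/1 loss is piecewise constant in the step size, so the candidate steps where any sample's prediction flips can be enumerated and the best one kept. All function names, the candidate-step construction, and the acceptance rule are assumptions made for illustration.

import numpy as np

def zero_one_loss(w, X, y):
    # Fraction of samples misclassified by the perceptron sign(X @ w).
    pred = np.sign(X @ w)
    pred[pred == 0] = 1  # break ties toward +1
    return np.mean(pred != y)

def random_coordinate_descent(X, y, n_iters=200, seed=None):
    # Illustrative sketch only: X is assumed to include a constant column
    # for the bias, and y takes values in {-1, +1}.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = rng.standard_normal(p)
    best_loss = zero_one_loss(w, X, y)
    for _ in range(n_iters):
        d = np.zeros(p)
        d[rng.integers(p)] = 1.0  # a single random coordinate direction
        Xd = X @ d
        Xw = X @ w
        nz = np.abs(Xd) > 1e-12
        # Step sizes at which a sample's decision value crosses zero.
        alphas = -Xw[nz] / Xd[nz]
        # Evaluate the piecewise-constant loss just on either side of
        # each flip point, plus the no-move option.
        candidates = np.concatenate([alphas - 1e-6, alphas + 1e-6, [0.0]])
        losses = [zero_one_loss(w + a * d, X, y) for a in candidates]
        k = int(np.argmin(losses))
        if losses[k] <= best_loss:  # accept only non-increasing steps
            w = w + candidates[k] * d
            best_loss = losses[k]
    return w, best_loss

Because the loss along the chosen direction changes only at finitely many flip points, each iteration can find an exact minimizer along that direction by enumeration, which is what makes a direct 0/1-loss search tractable here.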
Other Subjects
Adaptive boosting; Complex datasets; Computationally efficient; Coordinate descent; Ensemble learning; Nonseparable; Perceptron learning; Real-world problem; Test errors; Cost functions
Type
conference paper