Authors: Li, L.; Lin, H.-T. (HSUAN-TIEN LIN)
Title: Optimizing 0/1 loss for perceptrons by random coordinate descent
Type: conference paper
Date issued: 2007
Date accessioned/available: 2018-09-10
DOI: 10.1109/IJCNN.2007.4371051
Scopus ID: 2-s2.0-41549092416
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-41549092416&doi=10.1109%2fIJCNN.2007.4371051&partnerID=40&md5=0b5128bea409c6b1b7657730f5ed5057
Handle: http://scholars.lib.ntu.edu.tw/handle/123456789/330990

Abstract: The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms to directly minimize the 0/1 loss for perceptrons, and prove their convergence. Our algorithms are computationally efficient, and usually achieve the lowest 0/1 loss compared with other algorithms. Such advantages make them favorable for nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and could achieve the lowest test error for many complex data sets when coupled with AdaBoost. ©2007 IEEE.

Keywords: Adaptive boosting; Complex datasets; Computationally efficient; Coordinate descent; Ensemble learning; Nonseparable; Perceptron learning; Real-world problem; Test errors; Cost functions
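To make the abstract's idea concrete, below is a minimal sketch (not the paper's implementation) of descent on the 0/1 loss along random directions for a linear perceptron sign(w·x). It relies on the observation that along any fixed direction d the 0/1 loss is piecewise constant in the step size, with breakpoints a_i = -(w·x_i)/(d·x_i), so an exact line search only needs to check midpoints between sorted breakpoints. All function names, parameters, and the toy data are illustrative assumptions; the paper's actual algorithms choose directions and evaluate the line search more efficiently than the plain re-evaluation used here.

```python
import numpy as np

def zero_one_loss(w, X, y):
    """Count misclassified points under sign(X @ w), treating 0 as -1."""
    pred = np.where(X @ w > 0, 1, -1)
    return int(np.sum(pred != y))

def random_direction_descent(X, y, n_iters=200, rng=None):
    """Sketch: minimize 0/1 loss by exact line search along random directions.

    Along direction d, sign((w + a*d) @ x_i) flips only at the breakpoint
    a_i = -(w @ x_i) / (d @ x_i), so the loss is piecewise constant in a;
    checking one candidate step per constant piece suffices.
    """
    rng = np.random.default_rng(rng)
    n, dim = X.shape
    w = np.zeros(dim)
    best = zero_one_loss(w, X, y)
    for _ in range(n_iters):
        d = rng.standard_normal(dim)
        proj = X @ d
        ok = np.abs(proj) > 1e-12          # ignore points parallel to d
        if not ok.any():
            continue
        bps = np.sort(-(X[ok] @ w) / proj[ok])
        # One candidate per constant piece: midpoints between consecutive
        # breakpoints, plus a step below the smallest and above the largest.
        cands = np.concatenate(([bps[0] - 1.0],
                                (bps[:-1] + bps[1:]) / 2.0,
                                [bps[-1] + 1.0]))
        losses = [zero_one_loss(w + a * d, X, y) for a in cands]
        i = int(np.argmin(losses))
        if losses[i] < best:               # keep only strict improvements
            w = w + cands[i] * d
            best = losses[i]
    return w, best

# Usage on a hypothetical nonseparable toy problem:
rng = np.random.default_rng(0)
X = np.hstack([rng.standard_normal((200, 2)), np.ones((200, 1))])  # bias column
y = np.where(X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(200) > 0, 1, -1)
w, errors = random_direction_descent(X, y, n_iters=300, rng=1)
print(f"training 0/1 loss: {errors}/{len(y)}")
```

Each line search here re-evaluates the full loss at every candidate step for clarity, costing O(n²) per iteration; the exact search can instead be done incrementally in O(n log n) by sweeping the sorted breakpoints and updating the error count as each point's sign flips.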