Title: A two-level decomposition framework exploiting first and second order information for SVM training problems
Authors: Galvan, G.; Lapucci, M.; Lin, C.-J.; Sciandrone, M.
Type: journal article
Date: 2021-09-02
Year: 2021
ISSN: 1532-4435
Scopus EID: 2-s2.0-85105859040
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85105859040&partnerID=40&md5=0fe85483ab0e8d4d349e46b935f00996
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/581292

Abstract: In this work we present a novel way to solve the sub-problems that arise when decomposition algorithms are used to train Support Vector Machines (SVMs). State-of-the-art Sequential Minimal Optimization (SMO) solvers reduce the original problem to a sequence of two-variable sub-problems whose solution is analytical. Although considering more than two variables at a time usually lowers the number of iterations needed to train an SVM model, solving the sub-problem becomes much harder and the overall computational gains are limited, if any. We propose to apply the two-variable decomposition method to solve the sub-problems themselves and experimentally show that it is a viable and efficient way to deal with sub-problems of up to 50 variables. As a second contribution we explore different ways to select the working set and its size, combining first-order and second-order working set selection rules together with a strategy for exploiting cached elements of the Hessian matrix. An extensive numerical comparison shows that the method performs considerably better than state-of-the-art software.

Copyright: © 2021 Giulio Galvan, Matteo Lapucci, Chih-Jen Lin and Marco Sciandrone. © 2021 Microtome Publishing. All rights reserved.

Keywords: Numerical methods; Decomposition algorithm; Decomposition methods; Number of iterations; Numerical comparison; Sequential minimization optimizations; Support vector machine (SVMs); Two-level decomposition; Working set selection; Support vector machines
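
For illustration, the sketch below shows the kind of inner two-variable (SMO-style) solver the abstract describes, applied to a sub-problem restricted to a working set of indices. It is a minimal Python sketch under the standard SVM dual formulation (min ½ αᵀQα + pᵀα with p = −e, subject to yᵀα = 0 and 0 ≤ αᵢ ≤ C); the function name `smo_subproblem`, the maximal-violating-pair selection rule, and all parameter choices are assumptions made for this example, not code from the paper.

```python
import numpy as np

def smo_subproblem(Q, p, y, alpha, C, idx, tol=1e-6, max_iter=1000):
    """Solve the SVM dual restricted to the variables in `idx` by
    repeated analytic two-variable (SMO-style) updates.

    Dual: min 0.5*a'Qa + p'a  s.t.  y'a = 0,  0 <= a_i <= C,
    with y_i in {+1, -1}. Illustrative sketch, not the paper's code.
    """
    a = alpha.copy()
    grad = Q @ a + p                    # gradient of the full dual objective
    for _ in range(max_iter):
        # First-order (maximal violating pair) selection, restricted to idx.
        up  = [t for t in idx if (y[t] > 0 and a[t] < C) or (y[t] < 0 and a[t] > 0)]
        low = [t for t in idx if (y[t] > 0 and a[t] > 0) or (y[t] < 0 and a[t] < C)]
        if not up or not low:
            break
        i = max(up,  key=lambda t: -y[t] * grad[t])
        j = min(low, key=lambda t: -y[t] * grad[t])
        if (-y[i] * grad[i]) - (-y[j] * grad[j]) < tol:
            break                       # KKT conditions met on the working set
        # Analytic step along the feasible direction d_i = y_i, d_j = -y_j,
        # which keeps y'a = 0 since y_i, y_j are +-1.
        quad = max(Q[i, i] + Q[j, j] - 2 * y[i] * y[j] * Q[i, j], 1e-12)
        delta = (y[j] * grad[j] - y[i] * grad[i]) / quad
        # Clip the step so both variables stay inside the box [0, C].
        delta = min(delta, C - a[i] if y[i] > 0 else a[i])
        delta = min(delta, a[j] if y[j] > 0 else C - a[j])
        a[i] += y[i] * delta
        a[j] -= y[j] * delta
        # Rank-two gradient update: only columns i and j of Q are needed,
        # which is where caching Hessian columns pays off.
        grad += delta * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return a
```

An outer decomposition loop would select the working set `idx` (of up to, say, 50 variables) with first- or second-order rules, call a routine like this to solve the sub-problem, and reuse cached columns of Q across iterations; that outer layer is the paper's main contribution and is not reproduced here.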