Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization
Journal
Mathematical Programming Computation
Date Issued
2022
Author(s)
Abstract
In this paper, we present a limited-memory common-directions method for smooth optimization that interpolates between first- and second-order methods. At each iteration, a subspace of limited dimension is constructed from first-order information gathered in previous iterations, and an efficient Newton method is deployed to find an approximate minimizer within this subspace. With a properly selected subspace of dimension as small as two, the proposed algorithm achieves the optimal convergence rates of first-order methods while remaining a descent method, and it also converges quickly on nonconvex problems. Since the major operations of our method are dense matrix-matrix products, it can be efficiently parallelized in multicore environments even for sparse problems. By judiciously reusing historical information, our method is also communication-efficient in multi-machine distributed optimization, as the Newton steps can be computed with little communication. Numerical studies show that our method has superior empirical performance on real-world large-scale machine learning problems. © 2022, The Author(s).
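The core iteration the abstract describes, minimizing a quadratic model over a small subspace spanned by stored directions, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm; the function name `common_directions_step` and the choice to solve the projected system with a least-squares solver are assumptions for the sketch.

```python
# Hypothetical sketch of one limited-memory common-directions step:
# project the problem onto span(P) and take a Newton step there.
import numpy as np

def common_directions_step(grad, hess_vec, P):
    """Approximately minimize the local quadratic model over span(P).

    grad     : gradient at the current iterate, shape (n,)
    hess_vec : callable v -> H @ v (Hessian-vector product)
    P        : n x m matrix whose columns are the stored common
               directions (e.g. past gradients and steps), m << n
    Returns the full-space update direction d = P @ t.
    """
    # Project the gradient and Hessian into the m-dimensional subspace.
    g_sub = P.T @ grad                                   # shape (m,)
    HP = np.column_stack([hess_vec(P[:, j]) for j in range(P.shape[1])])
    H_sub = P.T @ HP                                     # small m x m system
    # Solve the projected Newton system; lstsq guards against rank
    # deficiency when stored directions are nearly dependent.
    t = np.linalg.lstsq(H_sub, -g_sub, rcond=None)[0]
    return P @ t
```

Because only the small `m x m` system is solved exactly, the per-iteration cost beyond the matrix products stays negligible, which is also why, in a distributed setting, each machine can form its local contribution to `g_sub` and `H_sub` and only these `O(m^2)` quantities need to be communicated.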
Subjects
First-order method; Optimal method; Second-order method; Smooth optimization
Other Subjects
Numerical methods; Distributed optimization; First-order methods; Large-scale optimization; Limited memory; Optimal methods; Optimisations; Optimization convergence; Parallelizations; Second-order methods; Smooth optimization; Newton-Raphson method
Type
journal article
