A Comparison of Optimization Methods for Large-scale L1-regularized Logistic Regression
Date Issued
2008
Author(s)
Lee, Cheng-Yu
Abstract
Large-scale logistic regression is useful for document classification and computational linguistics. The L1-regularized form can be used for feature selection, but its non-differentiability makes training more difficult. Various optimization methods have been proposed in recent years, but no serious comparison among them has been made. In this thesis we propose a trust region Newton method and compare it with several existing methods. Results show that our method is competitive with state-of-the-art L1-regularized logistic regression solvers. To investigate the applicability of L1-regularized logistic regression, we also conduct an experiment showing that, compared with L2-regularized logistic regression, it obtains a sparser solution with similar accuracy.
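The sparsity property described above can be illustrated with a minimal sketch. This is not the thesis's trust region Newton method; it uses plain proximal gradient descent (ISTA), whose soft-thresholding step is one standard way to handle the non-differentiable L1 term and drives uninformative weights to exactly zero. All data, parameters, and function names below are illustrative assumptions.

```python
import math
import random

def sigmoid(z):
    # numerically safe logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def soft_threshold(v, t):
    # proximal operator of t * |v|: shrinks v toward zero; exact zeros appear
    return math.copysign(max(abs(v) - t, 0.0), v)

def train_l1_logreg(X, y, lam=0.1, lr=0.5, iters=500):
    """Proximal gradient (ISTA) sketch for L1-regularized logistic regression:
    a gradient step on the smooth logistic loss, then soft-thresholding for
    the non-differentiable L1 penalty."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        w = [soft_threshold(w[j] - lr * grad[j], lr * lam) for j in range(d)]
    return w

# Tiny synthetic data: only feature 0 is informative; features 1-4 are noise.
random.seed(0)
X = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(200)]
y = [1 if xi[0] > 0 else 0 for xi in X]

w = train_l1_logreg(X, y)
print(w)  # the noise weights are thresholded to exactly 0.0
```

An L2 penalty, by contrast, only shrinks weights toward zero without zeroing them out, which is why the L1 form performs feature selection while an L2-regularized model keeps a dense weight vector.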
Subjects
Logistic regression
Optimization
L1-regularized
Newton method
Feature selection
Type
thesis
File(s)
Name
ntu-97-R95922035-1.pdf
Size
23.32 KB
Format
Adobe PDF
Checksum
(MD5):536f4a2e5403c46383cd34737380fa01
