https://scholars.lib.ntu.edu.tw/handle/123456789/118417
Title: A Comparison of Optimization Methods for Large-scale L1-regularized Logistic Regression
Author: Lee, Cheng-Yu
Keywords: Logistic regression; Optimization; L1-regularized; Newton method; Feature selection

Date of publication: 2008

Abstract: Large-scale logistic regression is useful for document classification and computational linguistics. The L1-regularized form can be used for feature selection, but its non-differentiability makes training more difficult. Various optimization methods have been proposed in recent years, but no serious comparison among them has been made. In this thesis we propose a trust region Newton method and compare it with several existing methods. Results show that our method is competitive with state-of-the-art L1-regularized logistic regression solvers. To investigate the applicability of L1-regularized logistic regression, we also conduct an experiment showing that, compared to L2-regularized logistic regression, it obtains a sparser solution with similar accuracy.
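The sparsity claim in the abstract can be illustrated with a minimal sketch. This is not the thesis's trust region Newton solver; it uses scikit-learn's liblinear backend as an assumed stand-in, with a synthetic dataset and a regularization strength chosen for illustration:

```python
# Minimal sketch (assumption: scikit-learn's liblinear solver as a
# stand-in for the solvers compared in the thesis). Fits L1- and
# L2-regularized logistic regression on synthetic data and counts
# nonzero weights to show that L1 yields a sparser solution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic problem: 100 features, only 10 of them informative.
X, y = make_classification(n_samples=500, n_features=100,
                           n_informative=10, random_state=0)

l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
l2 = LogisticRegression(penalty="l2", solver="liblinear", C=0.1).fit(X, y)

nnz_l1 = np.count_nonzero(l1.coef_)
nnz_l2 = np.count_nonzero(l2.coef_)
print(nnz_l1, nnz_l2)  # L1 zeroes out most weights; L2 keeps nearly all
```

L2 regularization shrinks weights toward zero but rarely makes them exactly zero, whereas the L1 penalty drives many weights to exactly zero, which is why the L1 solution can be read directly as a feature selection.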
URI: http://ntur.lib.ntu.edu.tw//handle/246246/184966
Appears in Collections: Department of Computer Science and Information Engineering
File | Description | Size | Format |
---|---|---|---|
ntu-97-R95922035-1.pdf | | 23.32 kB | Adobe PDF |
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated.