https://scholars.lib.ntu.edu.tw/handle/123456789/581361
Title: | Unbiased risk estimators can mislead: A case study of learning with complementary labels | Authors: | Chou Y.-T.; Niu G.; Lin H.-T. (HSUAN-TIEN LIN); Sugiyama M. |
Keywords: | Risk perception; Supervised learning; Different distributions; Gradient estimation; Gradient estimator; Overfitting; Risk minimization; Unbiased risk estimator; Weakly supervised learning; Zero bias; Learning systems | Date: | 2020 | Volume: | PartF168147-3 | Pages: | 1907-1916 | Source: | 37th International Conference on Machine Learning, ICML 2020 | Abstract: | In weakly supervised learning, the unbiased risk estimator (URE) is a powerful tool for training classifiers when training and test data are drawn from different distributions. Nevertheless, UREs lead to overfitting in many problem settings when the models are complex, such as deep networks. In this paper, we investigate the reasons for such overfitting by studying a weakly supervised problem called learning with complementary labels. We argue that the quality of gradient estimation matters more in risk minimization. Theoretically, we show that a URE gives an unbiased gradient estimator (UGE). Practically, however, UGEs may suffer from huge variance, which causes empirical gradients to be far from true gradients during minimization. To this end, we propose a novel surrogate complementary loss (SCL) framework that trades zero bias for reduced variance, making empirical gradients better aligned in direction with true gradients. Thanks to this characteristic, SCL successfully mitigates the overfitting issue and improves URE-based methods. © 37th International Conference on Machine Learning, ICML 2020. |
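To make the abstract's starting point concrete, the following NumPy sketch implements one classical URE for complementary labels under the uniform assumption (each complementary label drawn uniformly from the non-true classes), with cross-entropy as the base loss. The estimator form, the uniform assumption, and the choice of loss are illustrative assumptions for this sketch, not the paper's SCL method; the point is only that averaging the URE over all possible complementary labels recovers the ordinary loss on the true label, i.e. the estimator is unbiased.

```python
import math
import numpy as np

def softmax(z):
    """Numerically stable softmax over the class axis."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ure_complementary_loss(logits, comp_labels, num_classes):
    """Unbiased risk estimator for uniformly drawn complementary labels
    (assumed form for this sketch):
        R_hat = mean_i [ sum_k ell(f(x_i), k) - (K-1) * ell(f(x_i), ybar_i) ]
    where ell(f(x), k) = -log p_k(x) is the cross-entropy to class k
    and ybar_i is the complementary ("not this class") label.
    """
    log_p = np.log(softmax(logits) + 1e-12)
    sum_all = -log_p.sum(axis=1)                       # sum_k ell(f(x_i), k)
    ell_bar = -log_p[np.arange(len(comp_labels)), comp_labels]
    return np.mean(sum_all - (num_classes - 1) * ell_bar)

# Unbiasedness check: averaging the URE over the K-1 complementary labels
# of a sample equals the ordinary cross-entropy to its true label.
logits = np.array([[1.0, 0.5, -0.3]])   # one sample, K = 3, true label 0
avg_ure = np.mean([ure_complementary_loss(logits, np.array([c]), 3)
                   for c in (1, 2)])
true_ce = -math.log(softmax(logits)[0, 0])
```

The paper's observation is that while this estimator (and its gradient) is unbiased, its per-batch value is a difference of large terms, so the empirical gradient can have huge variance; SCL gives up exact unbiasedness to shrink that variance.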
URI: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85105120092&partnerID=40&md5=7e364fe3d51fef165d0b51faa4e23f50 https://scholars.lib.ntu.edu.tw/handle/123456789/581361 |
Appears in Collections: | Department of Computer Science and Information Engineering |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.