https://scholars.lib.ntu.edu.tw/handle/123456789/632499
Title: Active Learning by Learning
Authors: Hsu, W.-N.; Hsuan-Tien Lin
Date Issued: 2015
Volume: 4
Pages: 2659-2665
Source: Proceedings of the National Conference on Artificial Intelligence
Abstract: Pool-based active learning is an important technique that helps reduce labeling efforts within a pool of unlabeled instances. Currently, most pool-based active learning strategies are constructed based on some human-designed philosophy; that is, they reflect what human beings assume to be "good labeling questions." However, while such human-designed philosophies can be useful on specific data sets, it is often difficult to establish the theoretical connection of those philosophies to the true learning performance of interest. In addition, given that a single human-designed philosophy is unlikely to work on all scenarios, choosing and blending those strategies under different scenarios is an important but challenging practical task. This paper tackles this task by letting the machines adaptively "learn" from the performance of a set of given strategies on a particular data set. More specifically, we design a learning algorithm that connects active learning with the well-known multi-armed bandit problem. Further, we postulate that, given an appropriate choice for the multi-armed bandit learner, it is possible to estimate the performance of different strategies on the fly. Extensive empirical studies of the resulting ALBL algorithm confirm that it performs better than state-of-the-art strategies and a leading blending algorithm for active learning, all of which are based on human-designed philosophy. Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
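The abstract describes treating each active learning strategy as an arm of a multi-armed bandit and letting a bandit learner adaptively pick which strategy to query with. A minimal sketch of that bandit idea using a generic EXP3-style exponential-weights learner (the function names, the gamma value, and the reward simulation below are illustrative assumptions; the paper's ALBL algorithm itself builds on a more involved bandit learner with importance-weighted rewards, which this sketch does not reproduce):

```python
import math
import random

def exp3_select(weights, gamma):
    """Mix exponential weights with uniform exploration and sample one arm.

    Each arm stands for one active learning strategy; the returned index
    says which strategy should pose the next labeling query.
    """
    total = sum(weights)
    k = len(weights)
    probs = [(1.0 - gamma) * w / total + gamma / k for w in weights]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return k - 1, probs  # guard against floating-point round-off

def exp3_update(weights, probs, arm, reward, gamma):
    """Importance-weighted exponential update for the chosen arm.

    Dividing the observed reward (e.g. a learning-performance signal after
    the query) by the arm's selection probability keeps the reward estimate
    unbiased, so rarely chosen strategies are not unfairly penalized.
    """
    k = len(weights)
    est = reward / probs[arm]  # unbiased estimate of this arm's reward
    weights[arm] *= math.exp(gamma * est / k)
    return weights

# Illustrative use: three hypothetical strategies, where strategy 1 happens
# to yield reward on this (simulated) data set, so its weight should grow.
random.seed(0)
weights, gamma = [1.0, 1.0, 1.0], 0.1
for _ in range(200):
    arm, probs = exp3_select(weights, gamma)
    reward = 1.0 if arm == 1 else 0.0  # stand-in for a real performance signal
    exp3_update(weights, probs, arm, reward, gamma)
```

The importance weighting in the update is the key design choice: because only the chosen strategy's reward is observed each round, dividing by the selection probability is what lets the learner "estimate the performance of different strategies on the fly," as the abstract puts it.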
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84960192035&partnerID=40&md5=91f250d57cc83a311d7f3da2da09d36f
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/632499
SDG/Keywords: Artificial intelligence; Blending; Lakes; Ontology; Philosophical aspects; Active Learning; Active learning strategies; Empirical studies; Learning performance; Multi armed bandit; Multi-armed bandit problem; Specific data sets; State of the art; Learning algorithms
Appears in Collections: Department of Computer Science and Information Engineering
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.