https://scholars.lib.ntu.edu.tw/handle/123456789/636329
Title: | Model adaptation for personalized music emotion recognition | Authors: | Yi-Hsuan Yang; Ju-Chiang Wang; Yu-An Chen; Homer H. Chen |
Date Issued: | 15-Dec-2015 | Source: | Handbook of Pattern Recognition and Computer Vision (5th Edition) | Abstract: | This chapter deals with music emotion recognition (MER), which aims at associating music with the emotion it evokes in the human listener. Due to the strong connection between music and emotion, MER has emerged in recent years as a promising tool for managing music data. It can be applied in a variety of contexts, including music information retrieval, multimedia, advertisement, games, music therapy, education, and affective computing, among others. MER is a challenging problem because of the difficulties of i) obtaining reliable annotations of music emotion, ii) extracting audio features relevant to emotion modeling from the mixture of instrumental music and (possibly) singing voice, and iii) accommodating the subjective nature of emotion perception. The subjectivity issue, in particular, makes MER ill-posed and hard to generalize across listeners. For an MER system to work well in practice, personalization is a viable solution. This chapter describes model adaptation methods that address the subjectivity of music emotion. Personalization begins with an MER model trained offline on a general user base and progressively adapts it to a specific listener using the emotion annotations that listener provides. Active learning methods help reduce the number of personal annotations required to reach a target performance level. Readers may find this chapter useful for personalizing other pattern recognition problems as well, especially those involving numerical user ratings. |
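The adaptation scheme the abstract outlines — train a general model offline, then progressively blend in a listener's own ratings, with active learning choosing which songs to ask about — can be sketched in a few lines. This is an illustrative toy, not the chapter's actual method: the ridge-regression model, the synthetic feature/rating data, and the prediction-magnitude query heuristic are all assumptions made for the sketch.

```python
# Toy sketch of personalized model adaptation (NOT the chapter's method):
# a general ridge-regression emotion model is progressively blended with a
# model refit on a specific listener's ratings, queried one song at a time.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: audio features X, "general user base" valence ratings
# y_general, and one listener's systematically shifted ratings y_personal.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y_general = X @ w_true + rng.normal(scale=0.3, size=200)
y_personal = X @ (w_true + 0.5) + rng.normal(scale=0.3, size=200)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: solve (X'X + lam*I) w = X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# 1) Offline stage: fit the general model on the general user base.
w_general = ridge_fit(X[:150], y_general[:150])
w = w_general

# 2) Online stage: query 10 personal annotations. The query rule (pick the
# pool song with the largest predicted magnitude, as a crude informativeness
# proxy) is a placeholder for a real active-learning criterion.
pool = list(range(150, 200))
labeled_X, labeled_y = [], []
for _ in range(10):
    preds = X[pool] @ w
    i = pool[int(np.argmax(np.abs(preds)))]   # song to ask the listener about
    pool.remove(i)
    labeled_X.append(X[i])
    labeled_y.append(y_personal[i])           # the listener's own rating
    # Refit a personal model on the annotations gathered so far, then
    # interpolate: trust the personal model more as annotations accumulate.
    w_personal = ridge_fit(np.array(labeled_X), np.array(labeled_y), lam=5.0)
    alpha = len(labeled_y) / (len(labeled_y) + 10)
    w = (1 - alpha) * w_general + alpha * w_personal
```

The interpolation weight `alpha` plays the role of an adaptation rate: with few personal annotations the system stays close to the population model, and each new rating shifts it further toward the individual listener.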
URI: | https://scholars.lib.ntu.edu.tw/handle/123456789/636329 | ISBN: | 9789814656535 | DOI: | 10.1142/9789814656535_0010 |
Appears in Collections: | Department of Electrical Engineering
Unless their copyright terms state otherwise, items in this IR system are protected by copyright, with all rights reserved.