Model adaptation for personalized music emotion recognition
Journal
Handbook Of Pattern Recognition And Computer Vision (5th Edition)
ISBN
9789814656535
Date Issued
2015-12-15
Author(s)
Abstract
This chapter deals with music emotion recognition (MER), which aims to associate music with the emotion it evokes in the human listener. Owing to the strong connection between music and emotion, MER has emerged in recent years as a promising tool for managing music data. It can be applied in a variety of contexts, including music information retrieval, multimedia, advertising, games, music therapy, education, and affective computing. MER is a challenging problem because of the difficulties of i) obtaining reliable annotations of music emotion, ii) extracting audio features relevant for modeling emotion from the mixture of instrumental music and (possibly) singing voice, and iii) accommodating the subjective nature of emotion perception. The subjectivity issue, in particular, makes MER ill-posed and hard to generalize across listeners. For an MER system to work well in practice, personalization is a viable solution. This chapter describes model adaptation methods that address the subjectivity of music emotion. Personalization begins with an MER model trained offline on a general user base and progressively adapts it to a specific listener, using the emotion annotations that listener provides. Active learning methods help reduce the number of personal annotations required to reach a target performance level. Readers may also find this chapter useful for personalizing other pattern recognition problems, especially those involving numerical user ratings.
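The adapt-then-query loop sketched in the abstract can be illustrated with a minimal example. The sketch below is not the chapter's actual method; it assumes ridge regression on numerical emotion ratings, adaptation by shrinking the weights toward the general model instead of toward zero, and a simple disagreement-based query rule standing in for active learning. All function names and parameters here are illustrative.

```python
import numpy as np

def train_ridge(X, y, lam=1.0):
    # General model from the offline user base.
    # Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def adapt(w_general, X_user, y_user, lam=1.0):
    # MAP-style adaptation (an assumed scheme, not the chapter's):
    # minimize ||X_user w - y_user||^2 + lam * ||w - w_general||^2,
    # so with few personal annotations w stays near the general model.
    d = X_user.shape[1]
    A = X_user.T @ X_user + lam * np.eye(d)
    b = X_user.T @ y_user + lam * w_general
    return np.linalg.solve(A, b)

def pick_query(w_general, w_user, X_pool):
    # Toy active-learning rule: request an annotation for the clip on
    # which the general and personalized models disagree the most.
    disagreement = np.abs(X_pool @ w_general - X_pool @ w_user)
    return int(np.argmax(disagreement))

# Synthetic illustration: a listener whose ratings deviate from the crowd.
rng = np.random.default_rng(0)
w_crowd = np.array([1.0, 0.0, -1.0, 2.0, 0.5])       # "average" taste
X_gen = rng.normal(size=(200, 5))
y_gen = X_gen @ w_crowd + 0.01 * rng.normal(size=200)
w_general = train_ridge(X_gen, y_gen)

w_listener = w_crowd + np.array([0.5, -0.5, 0.0, 0.0, 1.0])  # personal taste
X_user = rng.normal(size=(20, 5))
y_user = X_user @ w_listener                          # listener's annotations
w_adapted = adapt(w_general, X_user, y_user, lam=0.1)
```

With even a handful of personal annotations, `w_adapted` moves closer to the listener's true rating function than the general model, while the `lam` term keeps it from overfitting when annotations are scarce.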
Type
book chapter