Title: Personalized music emotion recognition via model adaptation
Authors: Ju-Chiang Wang; Yi-Hsuan Yang; Hsin-Min Wang; Shyh-Kang Jeng
Type: conference paper
Date issued: 2012-12-01
Date available: 2023-10-24
ISBN: 9780615700502
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/636449
Scopus ID: 2-s2.0-84874426843
Scopus abstract URL: https://api.elsevier.com/content/abstract/scopus_id/84874426843

Abstract: In music information retrieval (MIR) research, developing a computational model that comprehends the affective content of a music signal, and using such a model to organize music collections, has been an essential topic. Emotion perception in music is by nature subjective, so a general emotion recognition system that performs equally well for every user may be insufficient; it would be more desirable for one's personal computer or device to understand his or her individual perception of music emotion. In our previous work, we developed the acoustic emotion Gaussians (AEG) model, which learns the broad emotion perception of music from general users. Such a general music emotion model, called the background AEG model in this paper, can recognize the perceived emotion of unseen music from a general point of view. In this paper, we go one step further and realize personalized music emotion modeling by adapting the background AEG model, in an online and dynamic fashion, with a limited number of emotion annotations provided by a target user. A novel maximum a posteriori (MAP)-based algorithm is proposed to achieve this in a probabilistic framework. We carry out quantitative evaluations on a well-known emotion-annotated corpus, MER60, to validate the effectiveness of the proposed method for personalized music emotion recognition. © 2012 APSIPA.
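The abstract names a MAP-based adaptation algorithm but does not detail it here. The sketch below is a generic illustration of how MAP adaptation of a Gaussian-mixture emotion model can work (relevance-factor mean adaptation, in the style commonly used for GMM speaker adaptation), not the paper's actual AEG update; the function name `map_adapt_means`, the relevance factor `tau`, and the 2-D valence-arousal layout are assumptions made for the example.

```python
import numpy as np

def map_adapt_means(means, covs, weights, X, tau=16.0):
    """MAP-adapt the component means of a background Gaussian mixture
    toward a small set of a target user's emotion annotations.

    means:   (K, D) background component means
    covs:    (K, D) diagonal covariances
    weights: (K,)   mixture weights
    X:       (N, D) the user's annotations, e.g. valence-arousal points
    tau:     relevance factor; larger values keep the adapted means
             closer to the background (prior) means
    """
    # Per-annotation log-likelihood under each diagonal-Gaussian component.
    diff = X[:, None, :] - means[None, :, :]                  # (N, K, D)
    log_p = -0.5 * np.sum(diff**2 / covs + np.log(2 * np.pi * covs), axis=2)
    log_p += np.log(weights)                                  # (N, K)

    # Posterior responsibilities gamma[n, k] = P(component k | x_n).
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)

    # Sufficient statistics: soft counts and data means per component.
    n_k = gamma.sum(axis=0)                                   # (K,)
    x_bar = gamma.T @ X / np.maximum(n_k[:, None], 1e-12)     # (K, D)

    # MAP update: interpolate between background mean and data mean,
    # with interpolation weight growing as the user provides more data.
    alpha = n_k / (n_k + tau)                                 # (K,)
    return (1 - alpha[:, None]) * means + alpha[:, None] * x_bar

# Example: a two-component background model over valence-arousal,
# adapted with five hypothetical annotations from one user.
means = np.array([[0.3, 0.4], [-0.2, -0.1]])
covs = np.full((2, 2), 0.05)
weights = np.array([0.6, 0.4])
user_labels = np.random.default_rng(0).normal([0.5, 0.6], 0.1, size=(5, 2))
adapted_means = map_adapt_means(means, covs, weights, user_labels)
```

The relevance-factor form makes the update naturally online: with few annotations the adapted model stays near the background model, and each additional annotation pulls the responsible components further toward the user's personal perception.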