Title: Using emotional context from article for contextual music recommendation
Authors: Chen, Chih-Ming; Tsai, Ming-Feng; Liu, Jen-Yu; Yang, Yi-Hsuan
Type: conference paper
Date issued: 2013-11-18
Date available: 2023-10-24
ISBN: 9781450324045
DOI: 10.1145/2502081.2502170
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/636474
Scopus ID: 2-s2.0-84887442454 (https://api.elsevier.com/content/abstract/scopus_id/84887442454)
Keywords: Emotion-based music recommendation | Listening context
SDGs: SDG4

Abstract: This paper proposes a context-aware approach that recommends music to a user based on the user's emotional state, predicted from the article the user writes. We analyze the association between user-generated text and music using a real-world dataset with <user, text, music> tripartite information collected from the social blogging website LiveJournal. The audio information represents various perceptual dimensions of music listening, including danceability, loudness, mode, and tempo; the emotional text information consists of bag-of-words features and three affective dimensions of an article: valence, arousal, and dominance. To combine these factors for music recommendation, a factorization machine-based approach is taken. Our evaluation shows that the emotional context information mined from user-generated articles does improve the quality of recommendation, compared to either the collaborative filtering approach or the content-based approach. Copyright © 2013 ACM.
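The abstract names a second-order factorization machine as the model that fuses user, item, and context features. As a minimal sketch (not the paper's implementation): the FM score is w0 + Σᵢ wᵢxᵢ + Σᵢ<ⱼ ⟨vᵢ, vⱼ⟩ xᵢxⱼ, where x would concatenate a user one-hot, a song one-hot, and context features such as valence/arousal/dominance and the audio descriptors; the feature layout and parameter values below are illustrative assumptions.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score for one feature vector.

    x  : (n,) feature vector, e.g. [user one-hot | song one-hot | context]
    w0 : global bias (scalar)
    w  : (n,) linear weights
    V  : (n, k) latent factor matrix, one k-dim vector per feature
    """
    linear = w0 + w @ x
    # Pairwise interactions computed in O(n*k) via the standard identity:
    # sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    s = V.T @ x                  # (k,) per-factor weighted sums
    s_sq = (V ** 2).T @ (x ** 2)  # (k,) per-factor sums of squares
    return linear + 0.5 * np.sum(s ** 2 - s_sq)
```

The O(n·k) identity is what makes FMs practical on the sparse, high-dimensional vectors that a <user, text, music> encoding produces: only the nonzero entries of x contribute to the two per-factor sums.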