Lyric text mining in music mood classification

Xiao Hu, J Stephen Downie, Andreas F. Ehmann

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This research examines the role lyric text can play in improving audio music mood classification. A new method is proposed to build a large ground truth set of 5,585 songs and 18 mood categories based on social tags so as to reflect a realistic, user-centered perspective. A relatively complete set of lyric features and representation models were investigated. The best performing lyric feature set was also compared to a leading audio-based system. In combining lyric and audio sources, hybrid feature sets built with three different feature selection methods were also examined. The results show patterns at odds with findings in previous studies: audio features do not always outperform lyrics features, and combining lyrics and audio features can improve performance in many mood categories, but not all of them.

Original language: English (US)
Title of host publication: Proceedings of the 10th International Society for Music Information Retrieval Conference, ISMIR 2009
Pages: 411-416
Number of pages: 6
State: Published - Dec 1 2009
Event: 10th International Society for Music Information Retrieval Conference, ISMIR 2009 - Kobe, Japan
Duration: Oct 26 2009 - Oct 30 2009


ASJC Scopus subject areas

  • Music
  • Information Systems


Cite this

Hu, X., Downie, J. S., & Ehmann, A. F. (2009). Lyric text mining in music mood classification. In Proceedings of the 10th International Society for Music Information Retrieval Conference, ISMIR 2009 (pp. 411-416).