Audio-visual affect recognition

Zhihong Zeng, Jilin Tu, Ming Liu, Thomas S. Huang, Brian Pianfetti, Dan Roth, Stephen Levinson

Research output: Contribution to journal › Article

Abstract

The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications for Human-Computer Interaction (HCI). In this paper, we present our efforts toward audio-visual affect recognition of 11 affective states customized for HCI applications (four cognitive/motivational states and seven basic affective states), collected from 20 non-actor subjects. A smoothing method is proposed to reduce the detrimental influence of speech on facial expression recognition. Feature selection analysis shows that, while speaking, subjects tend to express affect through brow movement in the face and through pitch and energy in prosody. For person-dependent recognition, we apply a voting method to combine the frame-based classification results from the audio and visual channels, yielding a 7.5% improvement over the best unimodal performance. For the person-independent test, we apply a multistream HMM to combine the information from multiple component streams, yielding a 6.1% improvement over the best component performance.
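As a rough, illustrative sketch of the two fusion schemes mentioned above (not the authors' implementation; the function names, label strings, and array shapes are assumptions made for the example), the person-dependent voting fusion can be thought of as a majority vote over per-frame labels from the audio and visual classifiers, and the multistream HMM fusion as a weighted combination of per-stream log-likelihoods:

```python
from collections import Counter
import numpy as np

def fuse_by_voting(audio_labels, visual_labels):
    """Majority vote over per-frame affect labels from the two modalities.

    audio_labels, visual_labels: sequences of per-frame labels for one
    utterance (hypothetical label strings). The utterance-level decision
    is the label receiving the most frame-level votes across both channels.
    """
    votes = Counter(audio_labels) + Counter(visual_labels)
    return votes.most_common(1)[0][0]

def fuse_multistream_loglik(stream_logliks, weights):
    """Weighted combination of per-stream HMM log-likelihoods.

    stream_logliks: array of shape (n_streams, n_classes), the log-likelihood
    of each affect class under each component stream (e.g., facial features,
    pitch, energy). weights: per-stream weights summing to 1.
    Returns the index of the best-scoring affect class.
    """
    combined = np.dot(weights, stream_logliks)
    return int(np.argmax(combined))

# Toy usage with made-up frame labels and scores:
print(fuse_by_voting(["joy", "joy", "neutral"], ["joy", "neutral", "joy"]))
print(fuse_multistream_loglik(np.array([[-1.0, -2.0], [-3.0, -0.5]]),
                              np.array([0.6, 0.4])))
```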

Original language: English (US)
Pages (from-to): 424-428
Number of pages: 5
Journal: IEEE Transactions on Multimedia
Volume: 9
Issue number: 2
DOI: 10.1109/TMM.2006.886310
State: Published - Feb 1 2007

Keywords

  • Affect recognition
  • Affective computing
  • Emotion recognition
  • Multimodal human-computer interaction

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering


Cite this

Zeng, Z., Tu, J., Liu, M., Huang, T. S., Pianfetti, B., Roth, D., & Levinson, S. (2007). Audio-visual affect recognition. IEEE Transactions on Multimedia, 9(2), 424-428. https://doi.org/10.1109/TMM.2006.886310