Audio-visual affective expression recognition through multistream fused HMM

Zhihong Zeng, Jilin Tu, Brian M. Pianfetti, Thomas S. Huang

Research output: Contribution to journal › Article › peer-review

Abstract

Advances in computer processing power and emerging algorithms are allowing new ways of envisioning Human-Computer Interaction. Although audio-visual fusion is expected to benefit affect recognition from both psychological and engineering perspectives, most existing approaches to automatic human affect analysis are unimodal: the information processed by the computer system is limited to either face images or speech signals. This paper focuses on the development of a computing algorithm that uses both audio and visual sensors to detect and track a user's affective state to aid computer decision making. Using our Multistream Fused Hidden Markov Model (MFHMM), we analyzed coupled audio and visual streams to detect four cognitive states (interest, boredom, frustration, and puzzlement) and seven prototypical emotions (neutral, happiness, sadness, anger, disgust, fear, and surprise). The MFHMM builds an optimal connection among multiple streams according to the maximum entropy principle and the maximum mutual information criterion. Person-independent experimental results from 20 subjects in 660 sequences show that the MFHMM approach outperforms face-only HMM, pitch-only HMM, energy-only HMM, and independent HMM fusion under both clean and varying audio-channel noise conditions.
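The abstract alone does not specify the MFHMM's stream-coupling details, but it names "independent HMM fusion" as one baseline. As a rough illustration of that baseline, the following is a minimal sketch, assuming hmmlearn's GaussianHMM: one model per affective class per stream (e.g., face, pitch, energy), with classification by a weighted sum of per-stream log-likelihoods. All feature layouts, fusion weights, and hyperparameters here are illustrative assumptions, not the authors' settings.

```python
# Sketch of independent HMM fusion (a baseline from the abstract, not the
# MFHMM itself): train a Gaussian HMM per class per stream, then classify
# a clip by the weighted sum of per-stream log-likelihoods.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency for this sketch

def train_stream_hmms(sequences_by_class, n_states=3):
    """Fit one HMM per affective class for a single stream.

    sequences_by_class: dict mapping class label -> list of (T_i, D) arrays.
    n_states: number of hidden states (an assumed hyperparameter).
    """
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.concatenate(seqs)          # stack frames of all sequences
        lengths = [len(s) for s in seqs]  # per-sequence frame counts
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_fused(stream_models, stream_obs, weights):
    """Pick the class maximizing the weighted sum of stream log-likelihoods.

    stream_models: dict stream_name -> {class label: GaussianHMM}
    stream_obs:    dict stream_name -> (T, D) observation array for one clip
    weights:       dict stream_name -> fusion weight (fixed here by assumption)
    """
    labels = next(iter(stream_models.values())).keys()
    scores = {
        label: sum(weights[s] * stream_models[s][label].score(stream_obs[s])
                   for s in stream_models)
        for label in labels
    }
    return max(scores, key=scores.get)
```

By contrast, the paper's MFHMM couples the streams at the model level rather than summing independently trained likelihoods, which is what the abstract credits for its robustness under audio-channel noise.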

Original language: English (US)
Article number: 4523967
Pages (from-to): 570-577
Number of pages: 8
Journal: IEEE Transactions on Multimedia
Volume: 10
Issue number: 4
DOIs
State: Published - Jun 2008

Keywords

  • Affect recognition
  • Affective computing
  • Emotion recognition
  • Human computing
  • Human-computer interaction
  • Multimodal fusion

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
