Training combination strategy of multi-stream fused hidden Markov model for audio-visual affect recognition

Zhihong Zeng, Yuxiao Hu, Ming Liu, Yun Fu, Thomas S. Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

To simulate the human ability to assess affect, an automatic affect recognition system should make use of multi-sensor information. Within the framework of the multi-stream fused hidden Markov model (MFHMM), we present a training combination strategy for audio-visual affect recognition. Unlike the weighting combination scheme, our approach can use a variety of learning methods to obtain a robust multi-stream fusion result. We evaluate our approach on person-independent recognition of 11 affective states from 20 subjects. The experimental results suggest that MFHMM outperforms the independent HMM (IHMM), which assumes independence among streams, and that the training combination strategy is superior to the weighting combination under both clean and varying audio-channel noise conditions.
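To make the contrast in the abstract concrete, the sketch below shows the difference between a fixed weighting combination and a data-driven ("training") combination of two streams' per-class scores. This is an illustrative toy, not the paper's actual MFHMM procedure: the function names, the grid-search combiner, and the log-likelihood inputs are all assumptions introduced here for exposition.

```python
def weighted_fusion(ll_audio, ll_video, w_audio=0.5):
    # Weighting combination: a fixed convex combination of the two
    # streams' per-class log-likelihoods (lists of equal length).
    return [w_audio * a + (1.0 - w_audio) * v
            for a, v in zip(ll_audio, ll_video)]

def predict(ll_audio, ll_video, w_audio):
    # Classify by the highest fused score.
    fused = weighted_fusion(ll_audio, ll_video, w_audio)
    return max(range(len(fused)), key=fused.__getitem__)

def fit_weight(samples, labels, grid=None):
    # Hypothetical "training combination" stand-in: learn the stream
    # weight from held-out data (here, by simple grid search on
    # accuracy) instead of fixing it in advance.
    grid = grid if grid is not None else [i / 20 for i in range(21)]
    best_w, best_acc = 0.5, -1.0
    for w in grid:
        acc = sum(predict(a, v, w) == y
                  for (a, v), y in zip(samples, labels)) / len(labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w
```

The point of the contrast: when one channel degrades (e.g., audio under noise), a learned combiner can shift weight toward the more reliable stream, whereas a fixed weighting scheme cannot adapt.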

Original language: English (US)
Title of host publication: Proceedings of the 14th Annual ACM International Conference on Multimedia, MM 2006
Pages: 65-68
Number of pages: 4
DOIs
State: Published - 2006
Externally published: Yes
Event: 14th Annual ACM International Conference on Multimedia, MM 2006 - Santa Barbara, CA, United States
Duration: Oct 23, 2006 - Oct 27, 2006

Publication series

Name: Proceedings of the 14th Annual ACM International Conference on Multimedia, MM 2006

Other

Other: 14th Annual ACM International Conference on Multimedia, MM 2006
Country/Territory: United States
City: Santa Barbara, CA
Period: 10/23/06 - 10/27/06

Keywords

  • Affect recognition
  • Affective computing
  • Emotion recognition
  • Multimodal human-computer interaction

ASJC Scopus subject areas

  • General Computer Science
  • Media Technology

