Emotion recognition based on joint visual and audio cues

Nicu Sebe, Ira Cohen, Theo Gevers, Thomas S. Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. However, one necessary ingredient for natural interaction is still missing: emotions. This paper describes the problem of bimodal emotion recognition and advocates the use of probabilistic graphical models when fusing the different modalities. We test our audio-visual emotion recognition approach on 38 subjects with 11 HCI-related affect states. The experimental results show that the average person-dependent emotion recognition accuracy is greatly improved when both visual and audio information are used in classification.
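The abstract advocates probabilistic graphical models for fusing the audio and visual channels but does not spell out a particular model. As a rough illustration of the simplest such fusion, the sketch below combines per-modality class posteriors under a conditional-independence (naive-Bayes-style) assumption; the label set, probabilities, and function name are hypothetical and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of probabilistic bimodal fusion: each modality yields
# per-class posteriors, and the two modalities are assumed conditionally
# independent given the emotion class. Illustrative only.

EMOTIONS = ["neutral", "happy", "frustrated"]  # placeholder label set

def fuse_posteriors(p_audio: np.ndarray,
                    p_visual: np.ndarray,
                    prior: np.ndarray) -> np.ndarray:
    """Combine P(class | audio) and P(class | visual).

    Under conditional independence,
        P(c | a, v)  ∝  P(c | a) * P(c | v) / P(c),
    since each single-modality posterior already carries one factor of the
    class prior P(c).
    """
    joint = p_audio * p_visual / prior
    return joint / joint.sum()  # renormalize to a proper distribution

# Usage: toy posteriors for a single audio/visual observation pair.
prior = np.full(3, 1.0 / 3)          # uniform class prior
p_a = np.array([0.2, 0.5, 0.3])      # hypothetical audio classifier output
p_v = np.array([0.1, 0.7, 0.2])      # hypothetical visual classifier output
p_fused = fuse_posteriors(p_a, p_v, prior)
print(EMOTIONS[int(np.argmax(p_fused))], p_fused)
```

In this toy run the fused posterior concentrates more sharply on the class both modalities favor, which is the intuition behind the reported gain from joint audio-visual classification.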

Original language: English (US)
Title of host publication: Proceedings - 18th International Conference on Pattern Recognition, ICPR 2006
Pages: 1136-1139
Number of pages: 4
DOIs
State: Published - 2006
Event: 18th International Conference on Pattern Recognition, ICPR 2006 - Hong Kong, China
Duration: Aug 20, 2006 - Aug 24, 2006

Publication series

Name: Proceedings - International Conference on Pattern Recognition
Volume: 1
ISSN (Print): 1051-4651

Other

Other: 18th International Conference on Pattern Recognition, ICPR 2006
Country/Territory: China
City: Hong Kong
Period: 8/20/06 - 8/24/06

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
