Multimodal emotion recognition

Nicu Sebe, Ira Cohen, Thomas S. Huang

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This chapter explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and recent advances in emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and advocate the use of probabilistic graphical models for fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.
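To make the fusion idea concrete, the following is a minimal sketch (not taken from the chapter) of probabilistic late fusion of per-modality emotion classifiers, assuming each modality outputs a posterior over the same emotion classes and that modalities are conditionally independent given the class (a naive Bayes-style simplification of the richer graphical models the chapter advocates). All names, emotion labels, and numbers below are illustrative.

    # Illustrative sketch: naive Bayes-style fusion of per-modality posteriors.
    # Assumes modalities are conditionally independent given the emotion class.
    import numpy as np

    EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]  # hypothetical label set

    def fuse_posteriors(modality_posteriors, prior=None):
        """Combine P(emotion | modality_i) vectors into one fused posterior.

        modality_posteriors: list of 1-D arrays, one per modality (e.g. face,
        voice), each summing to 1 over EMOTIONS.
        """
        n = len(EMOTIONS)
        prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior)
        # Work in log space for numerical stability:
        # P(c | x_1..x_M) proportional to P(c) * prod_i [ P(c | x_i) / P(c) ]
        log_post = np.log(prior)
        for p in modality_posteriors:
            log_post += np.log(np.asarray(p) + 1e-12) - np.log(prior)
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

    # Hypothetical outputs of a face-based and a voice-based classifier.
    face_post = [0.60, 0.05, 0.10, 0.20, 0.05]
    voice_post = [0.40, 0.10, 0.15, 0.25, 0.10]
    fused = fuse_posteriors([face_post, voice_post])
    print(EMOTIONS[int(np.argmax(fused))], fused.round(3))

The conditional-independence assumption is what makes this a sketch rather than a faithful rendering of the chapter's approach; general probabilistic graphical models can also capture dependencies between modalities and temporal structure.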

Original language: English (US)
Title of host publication: Handbook of Pattern Recognition and Computer Vision, 3rd Edition
Publisher: World Scientific Publishing Co.
Pages: 387-410
Number of pages: 24
ISBN (Electronic): 9789812775320
ISBN (Print): 9812561056, 9789812561053
State: Published - Jan 1 2005

ASJC Scopus subject areas

  • General Computer Science
