Multimodal human emotion/expression recognition

Lawrence S. Chen, Thomas S. Huang, Tsutomu Miyasato, Ryohei Nakatsu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recognizing human facial expression and emotion by computer is an interesting and challenging problem. Many have investigated emotional content in speech alone, or recognition of human facial expressions solely from images. However, relatively little has been done on combining these two modalities for recognizing human emotions. L.C. De Silva et al. (1997) studied human subjects' ability to recognize emotions from viewing video clips of facial expressions and listening to the corresponding emotional speech stimuli. They found that humans recognize some emotions better from audio information, and other emotions better from video. They also proposed an algorithm that integrates both kinds of inputs to mimic the human recognition process. While attempting to implement the algorithm, we encountered difficulties which led us to a different approach. We found these two modalities to be complementary. By using both, we show it is possible to achieve higher recognition rates than with either modality alone.
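The abstract does not spell out the integration scheme; a minimal sketch of one common way to combine two modalities is decision-level (late) fusion, where per-class scores from an audio classifier and a video classifier are merged before picking the final label. The emotion labels, scores, and weights below are invented for illustration and are not the paper's actual algorithm:

```python
# Hypothetical decision-level fusion of audio and video emotion scores.
# All numbers and labels here are made up for illustration; the paper's
# actual integration method is not reproduced.

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def fuse(audio_probs, video_probs, audio_weight=0.5):
    """Return the emotion with the highest weighted-sum score.

    audio_probs, video_probs: per-class scores (same order as EMOTIONS).
    audio_weight: relative trust in the audio modality (0..1).
    """
    video_weight = 1.0 - audio_weight
    fused = [audio_weight * a + video_weight * v
             for a, v in zip(audio_probs, video_probs)]
    return EMOTIONS[fused.index(max(fused))]

# Audio evidence favors anger, video favors happiness; with equal
# weights the fused decision follows the stronger combined evidence.
audio = [0.6, 0.2, 0.1, 0.1]   # made-up audio classifier output
video = [0.1, 0.7, 0.1, 0.1]   # made-up video classifier output
print(fuse(audio, video))      # happiness (0.45 vs 0.35 for anger)
```

Weighting the modalities differently per emotion (e.g. trusting audio more for anger, video more for happiness) is one way to exploit the complementarity the abstract describes.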

Original language: English (US)
Title of host publication: Proceedings - 3rd IEEE International Conference on Automatic Face and Gesture Recognition, FG 1998
Publisher: IEEE Computer Society
Pages: 366-371
Number of pages: 6
ISBN (Print): 0818683449, 9780818683442
DOIs
State: Published - 1998
Event: 3rd IEEE International Conference on Automatic Face and Gesture Recognition, FG 1998 - Nara, Japan
Duration: Apr 14 1998 - Apr 16 1998

Publication series

Name: Proceedings - 3rd IEEE International Conference on Automatic Face and Gesture Recognition, FG 1998


ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
