Audio-visual affect recognition in activation-evaluation space

Zhihong Zeng, Zhenqiu Zhang, Brian Pianfetti, Jilin Tu, Thomas S. Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications for Human-Computer Interaction (HCI). To more accurately simulate the human ability to assess affect through multi-sensory data, automatic affect recognition should also make use of multimodal data. In this paper, we present our efforts toward audio-visual affect recognition. Drawing on psychological research, we have chosen affect categories based on an activation-evaluation space, which is robust in capturing significant aspects of emotion. We apply the Fisher boosting learning algorithm, which builds a strong classifier by combining a small set of weak classification functions. Our experimental results show that, with 30 Fisher features, the testing error rate of our bimodal affect recognition system is about 16% on the evaluation axis and 13% on the activation axis.
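The abstract describes building a strong classifier by boosting a small set of weak Fisher-discriminant classifiers. As an illustration only, the sketch below combines Fisher linear discriminant "stumps" in an AdaBoost-style loop; the function names, the regularization constant, and the use of AdaBoost weighting are assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def fisher_stump(X, y, w):
    """Weak learner (illustrative): project the weighted sample onto the
    Fisher discriminant direction, then threshold at the midpoint of the
    two class means. Labels y are in {+1, -1}; w are sample weights."""
    pos, neg = y == 1, y == -1
    mu_p = np.average(X[pos], axis=0, weights=w[pos])
    mu_n = np.average(X[neg], axis=0, weights=w[neg])
    # Weighted within-class scatter, regularized so it is invertible.
    Sw = np.eye(X.shape[1]) * 1e-6
    for cls, mu in ((pos, mu_p), (neg, mu_n)):
        d = X[cls] - mu
        Sw += (w[cls][:, None] * d).T @ d
    direction = np.linalg.solve(Sw, mu_p - mu_n)
    proj = X @ direction
    thresh = 0.5 * (proj[pos].mean() + proj[neg].mean())
    pred = np.where(proj > thresh, 1, -1)
    return direction, thresh, pred

def boost(X, y, rounds=30):
    """AdaBoost-style combination of Fisher weak learners (a sketch,
    not the paper's exact Fisher boosting procedure)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        direction, thresh, pred = fisher_stump(X, y, w)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this weak learner
        ensemble.append((alpha, direction, thresh))
        w *= np.exp(-alpha * y * pred)  # up-weight misclassified samples
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote over all weak learners."""
    score = sum(a * np.where(X @ d > t, 1, -1) for a, d, t in ensemble)
    return np.where(score >= 0, 1, -1)
```

With, say, 30 rounds, the ensemble mirrors the "30 Fisher features" setting mentioned in the abstract: each round contributes one projection direction and threshold to the final vote.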

Original language: English (US)
Title of host publication: IEEE International Conference on Multimedia and Expo, ICME 2005
Pages: 828-831
Number of pages: 4
DOIs
State: Published - 2005
Event: IEEE International Conference on Multimedia and Expo, ICME 2005 - Amsterdam, Netherlands
Duration: Jul 6 2005 to Jul 8 2005

Publication series

Name: IEEE International Conference on Multimedia and Expo, ICME 2005
Volume: 2005

Other

Other: IEEE International Conference on Multimedia and Expo, ICME 2005
Country: Netherlands
City: Amsterdam
Period: 7/6/05 to 7/8/05

ASJC Scopus subject areas

  • Engineering (all)
