EAVA: A 3D emotive audio-visual avatar

Hao Tang, Yun Fu, Jilin Tu, Thomas S. Huang, Mark Hasegawa-Johnson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Emotive audio-visual avatars have the potential to significantly improve the quality of Human-Computer Interaction (HCI). In this paper, the technical approaches of a novel framework leading to a text-driven 3D Emotive Audio-Visual Avatar (EAVA) are proposed. The primary work focuses on 3D face modeling, realistic emotional facial expression animation, emotive speech synthesis, and the co-articulation of speech gestures (i.e., lip movements due to speech production) and facial expressions. Experimental results indicate that EAVA achieves a certain degree of naturalness and expressiveness in both the audio and visual aspects. Further improvements can be expected by incorporating data-driven statistical learning models into the framework.

Original language: English (US)
Title of host publication: 2008 IEEE Workshop on Applications of Computer Vision, WACV
State: Published - 2008
Event: 2008 IEEE Workshop on Applications of Computer Vision, WACV - Copper Mountain, CO, United States
Duration: Jan 7, 2008 - Jan 9, 2008

Publication series

Name: 2008 IEEE Workshop on Applications of Computer Vision, WACV

Other

Other: 2008 IEEE Workshop on Applications of Computer Vision, WACV
Country/Territory: United States
City: Copper Mountain, CO
Period: 1/7/08 - 1/9/08

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Computer Science Applications
