Deriving facial articulation models from image sequences

Hai Tao, Thomas S Huang

Research output: Contribution to conference › Paper

Abstract

In this paper, human facial articulation models are derived from frontal and side view image sequences using a connected-vibrations non-rigid motion tracking algorithm. First, a 3D head geometric model is fitted to the subject's face in the initial frame. The face model is masked with multiple planar membrane patches that are connected to each other. Then, the in-plane facial motions in the image sequences are computed from an over-determined system. Finally, this information is exploited to create or customize a facial articulation model.
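The abstract's "over-determined system" step can be illustrated with a small least-squares sketch. The snippet below is a hypothetical example, not the paper's actual formulation: it assumes an affine in-plane motion model for one planar patch and estimates its parameters from tracked point correspondences; all function and variable names are invented for illustration.

```python
import numpy as np

def estimate_affine_motion(src_pts, dst_pts):
    """Fit x' = A x + t from N >= 3 point pairs via least squares.

    Each correspondence contributes two equations, so N > 3 points
    yields an over-determined system solved in the least-squares sense.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = src.shape[0]
    # Design matrix: unknowns are [a11, a12, a21, a22, tx, ty].
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # x-equation rows: a11*x + a12*y + tx
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src   # y-equation rows: a21*x + a22*y + ty
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)  # interleaved [x1', y1', x2', y2', ...]
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = params[:4].reshape(2, 2)
    t = params[4:6]
    return A, t

# Four corners of a patch, displaced by a pure translation (0.1, 0.2):
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(0.1, 0.2), (1.1, 0.2), (0.1, 1.2), (1.1, 1.2)]
A, t = estimate_affine_motion(src, dst)
```

In the paper's setting the patches are additionally coupled to their neighbors, so the per-patch equations would be stacked into one larger system rather than solved independently as here.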

Original language: English (US)
Pages: 158-162
Number of pages: 5
State: Published - Dec 1 1998
Event: Proceedings of the 1998 International Conference on Image Processing, ICIP. Part 2 (of 3) - Chicago, IL, USA
Duration: Oct 4 1998 - Oct 7 1998

Other

Other: Proceedings of the 1998 International Conference on Image Processing, ICIP. Part 2 (of 3)
City: Chicago, IL, USA
Period: 10/4/98 - 10/7/98

ASJC Scopus subject areas

  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering


Cite this

Tao, H., & Huang, T. S. (1998). Deriving facial articulation models from image sequences. 158-162. Paper presented at Proceedings of the 1998 International Conference on Image Processing, ICIP. Part 2 (of 3), Chicago, IL, USA.