Abstract
In this paper, human facial articulation models are derived from frontal and side-view image sequences using a connected-vibrations non-rigid motion tracking algorithm. First, a 3D head geometric model is fitted to the subject's face in the initial frame. The face model is masked with multiple planar membrane patches that are connected to each other. Then, the in-plane facial motions in the image sequences are computed from an over-determined system. Finally, this information is exploited to create or customize a facial articulation model.
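The abstract mentions recovering in-plane patch motions from an over-determined system. A minimal sketch of that step, under the assumption that the per-patch motion parameters are found as the least-squares solution of stacked linear constraints (the constraint matrix `A`, parameter vector `x`, and observation vector `b` here are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical over-determined system A x = b:
# more constraint rows (e.g., pixel brightness-constancy equations
# over a patch) than unknown in-plane motion parameters.
rng = np.random.default_rng(0)
n_constraints, n_params = 12, 4

A = rng.standard_normal((n_constraints, n_params))  # stacked constraints
x_true = np.array([0.5, -0.2, 1.0, 0.1])            # "true" motion params
b = A @ x_true                                      # noiseless observations

# Least-squares solution of the over-determined system.
x_est, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
```

With noiseless observations and a full-rank constraint matrix, `x_est` recovers `x_true`; in practice the residual of the fit indicates how well the planar-patch motion model explains the image data.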
Original language | English (US)
---|---
Pages | 158-162
Number of pages | 5
State | Published - 1998
Event | Proceedings of the 1998 International Conference on Image Processing, ICIP. Part 2 (of 3) - Chicago, IL, USA. Duration: Oct 4 1998 → Oct 7 1998
Other
Other | Proceedings of the 1998 International Conference on Image Processing, ICIP. Part 2 (of 3)
---|---
City | Chicago, IL, USA
Period | 10/4/98 → 10/7/98
ASJC Scopus subject areas
- Hardware and Architecture
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering