Subspace learning for human head pose estimation

Yuxiao Hu, Thomas S Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes a fully automatic framework for static human head pose estimation. Given a 2D multi-view human face image as input, the face region is detected and cropped, and the pose of the face is then assigned to one of several pose categories. Based on the appearance of the face region, several subspace learning methods, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP) and Pose-Specific Subspace (PSS), are applied for effective representation of the face poses. Several factors, such as identity, illumination changes and expression variations, are considered during classification. Experimental results on a large public database demonstrate the effectiveness of the proposed framework and recognition algorithms. Detailed performance comparisons and discussions are also provided to aid algorithm selection when designing practical face pose estimation systems for different scenarios.
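The abstract's pipeline — project cropped face appearances into a learned subspace, then classify the projection into a pose category — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses PCA (one of the four methods named above) fitted via SVD, synthetic Gaussian clusters standing in for face images of three hypothetical pose classes, and a simple nearest-centroid classifier in the subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for vectorized face crops: 3 pose classes,
# each a Gaussian cluster in 100-D "pixel" space (illustrative only).
n_per_class, dim, n_classes = 30, 100, 3
means = rng.normal(scale=5.0, size=(n_classes, dim))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

def pca_fit(X, k):
    """PCA via SVD: center the data, keep the top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]            # mean vector and (k, dim) projection basis

def pca_project(X, mu, basis):
    """Project centered samples onto the learned subspace."""
    return (X - mu) @ basis.T

mu, basis = pca_fit(X, k=5)
Z = pca_project(X, mu, basis)

# Nearest-centroid pose classification in the learned subspace.
centroids = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])

def classify_pose(x):
    z = pca_project(x[None, :], mu, basis)
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

preds = np.array([classify_pose(x) for x in X])
accuracy = (preds == y).mean()
```

Swapping `pca_fit` for a discriminative projection (LDA, LPP) or one subspace per pose class (PSS) changes only the subspace-learning step; the detect-crop-project-classify structure of the framework stays the same.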

Original language: English (US)
Title of host publication: 2008 IEEE International Conference on Multimedia and Expo, ICME 2008 - Proceedings
Pages: 1585-1588
Number of pages: 4
DOIs
State: Published - Oct 23 2008
Event: 2008 IEEE International Conference on Multimedia and Expo, ICME 2008 - Hannover, Germany
Duration: Jun 23 2008 - Jun 26 2008

Publication series

Name: 2008 IEEE International Conference on Multimedia and Expo, ICME 2008 - Proceedings

Other

Other: 2008 IEEE International Conference on Multimedia and Expo, ICME 2008
Country/Territory: Germany
City: Hannover
Period: 6/23/08 - 6/26/08

Keywords

  • Classification
  • Face pose estimation
  • Subspace learning

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering
