TY - GEN
T1 - Multi-view facial expression recognition
AU - Hu, Yuxiao
AU - Zeng, Zhihong
AU - Yin, Lijun
AU - Wei, Xiaozhou
AU - Zhou, Xi
AU - Huang, Thomas S.
PY - 2008
Y1 - 2008
N2 - The ability to handle multi-view facial expressions is important for computers to understand affective behavior in less constrained environments. However, most existing methods for facial expression recognition are based on near-frontal view face data and are likely to fail in non-frontal facial expression analysis. In this paper, we investigate the analysis of multi-view facial expressions. Three local patch descriptors (HoG, LBP, and SIFT) are used to extract facial features, which serve as inputs to a nearest-neighbor indexing method that identifies facial expressions. We also investigate the influence of feature dimension reduction (PCA, LDA, and LPP) and classifier fusion on recognition performance. We test our approaches on multi-view data generated from the BU-3DFE 3D facial expression database, which includes 100 subjects with 6 emotions and 4 intensity levels. Our extensive person-independent experiments suggest that the SIFT descriptor outperforms HoG and LBP, and that LPP outperforms PCA and LDA in this application. However, classifier fusion does not show a significant advantage over the SIFT-only classifier.
AB - The ability to handle multi-view facial expressions is important for computers to understand affective behavior in less constrained environments. However, most existing methods for facial expression recognition are based on near-frontal view face data and are likely to fail in non-frontal facial expression analysis. In this paper, we investigate the analysis of multi-view facial expressions. Three local patch descriptors (HoG, LBP, and SIFT) are used to extract facial features, which serve as inputs to a nearest-neighbor indexing method that identifies facial expressions. We also investigate the influence of feature dimension reduction (PCA, LDA, and LPP) and classifier fusion on recognition performance. We test our approaches on multi-view data generated from the BU-3DFE 3D facial expression database, which includes 100 subjects with 6 emotions and 4 intensity levels. Our extensive person-independent experiments suggest that the SIFT descriptor outperforms HoG and LBP, and that LPP outperforms PCA and LDA in this application. However, classifier fusion does not show a significant advantage over the SIFT-only classifier.
UR - http://www.scopus.com/inward/record.url?scp=67650652464&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=67650652464&partnerID=8YFLogxK
U2 - 10.1109/AFGR.2008.4813445
DO - 10.1109/AFGR.2008.4813445
M3 - Conference contribution
AN - SCOPUS:67650652464
SN - 9781424421541
T3 - 2008 8th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2008
BT - 2008 8th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2008
T2 - 2008 8th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2008
Y2 - 17 September 2008 through 19 September 2008
ER -