TY - JOUR
T1 - Recognizing emotions from an ensemble of features
AU - Tariq, Usman
AU - Lin, Kai Hsiang
AU - Li, Zhen
AU - Zhou, Xi
AU - Wang, Zhaowen
AU - Le, Vuong
AU - Huang, Thomas S.
AU - Lv, Xutao
AU - Han, Tony X.
N1 - Funding Information:
Manuscript received May 11, 2011; revised November 3, 2011 and February 15, 2012; accepted March 6, 2012. Date of publication May 3, 2012; date of current version July 13, 2012. This work was supported by a Google Faculty Research Award. This paper was recommended by Associate Editor M. Pantic.
PY - 2012
Y1 - 2012
N2 - This paper details the authors' efforts to push the baseline of emotion recognition performance on the Geneva Multimodal Emotion Portrayals (GEMEP) Facial Expression Recognition and Analysis database. Both subject-dependent and subject-independent emotion recognition scenarios are addressed in this paper. The approach toward solving this problem involves face detection, followed by key-point identification, then feature generation, and finally classification. An ensemble of features consisting of hierarchical Gaussianization, scale-invariant feature transform, and some coarse motion features has been used. In the classification stage, we used support vector machines. The classification task has been divided into person-specific and person-independent emotion recognition using face recognition with either manual labels or automatic algorithms. In terms of classification rate, we achieve 100% performance for person-specific recognition, 66% for person-independent recognition, and 80% overall, for emotion recognition with manual identification of subjects.
KW - Biometrics
KW - computer vision
KW - emotion recognition
KW - machine vision
UR - http://www.scopus.com/inward/record.url?scp=84864127477&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84864127477&partnerID=8YFLogxK
U2 - 10.1109/TSMCB.2012.2194701
DO - 10.1109/TSMCB.2012.2194701
M3 - Article
C2 - 22575690
AN - SCOPUS:84864127477
SN - 1083-4419
VL - 42
SP - 1017
EP - 1026
JO - IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
JF - IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
IS - 4
M1 - 6194349
ER -