TY - GEN
T1 - Detecting student engagement: Human versus machine
T2 - 24th ACM International Conference on User Modeling, Adaptation, and Personalization, UMAP 2016
AU - Bosch, Nigel
N1 - Funding Information:
I would like to thank my advisor, Sidney D'Mello, for his guidance on this research. This research was supported by the National Science Foundation (NSF) (DRL 1235958 and IIS 1523091). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the author(s) and do not necessarily reflect the views of the NSF.
PY - 2016/7/13
Y1 - 2016/7/13
AB - Engagement is complex and multifaceted, but crucial to learning. Computerized learning environments can provide a superior learning experience for students by automatically detecting student engagement (and thus also disengagement) and adapting to it. This paper describes results from several previous studies that utilized facial features to automatically detect student engagement, and proposes new methods to expand and improve on those results. Videos of students will be annotated by third-party observers as mind wandering (disengaged) or not mind wandering (engaged). Automatic detectors will also be trained to classify the same videos based on students' facial features, and their predictions will be compared to the observers' annotations. These detectors will then be improved by engineering features to capture facial expressions noted by observers and by more heavily weighting training instances that were exceptionally well classified by observers. Finally, implications of previous results and of the proposed work are discussed.
KW - Affective computing
KW - Engagement detection
KW - Facial expressions
UR - http://www.scopus.com/inward/record.url?scp=84984910859&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84984910859&partnerID=8YFLogxK
U2 - 10.1145/2930238.2930371
DO - 10.1145/2930238.2930371
M3 - Conference contribution
AN - SCOPUS:84984910859
T3 - UMAP 2016 - Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization
SP - 317
EP - 320
BT - UMAP 2016 - Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization
PB - Association for Computing Machinery
Y2 - 13 July 2016 through 17 July 2016
ER -