Video-based Affect Detection in Noninteractive Learning Environments

Yuxuan Chen, Nigel Bosch, Sidney K. D'Mello

Research output: Contribution to conference › Paper › peer-review


The current paper explores possible solutions to the problem of detecting affective states from facial expressions during text/diagram comprehension, a context devoid of interactive events that can be used to infer affect. These data present an interesting challenge for face-based affect detection because the likely locations of affective facial expressions within videos of students' faces are entirely unknown. In the current study, students engaged in a text/diagram comprehension activity, after which they self-reported their levels of confusion, frustration, and engagement. Data were chosen from various locations within the videos, and texture-based facial features were extracted to build affect detectors. Varying amounts of data were also used to determine an appropriate window of data to analyze for each affect detector. Detector performance was measured using Area Under the ROC Curve (AUC), where chance level is .5 and perfect classification is 1. Confusion (AUC = .637), engagement (AUC = .554), and frustration (AUC = .609) were detected at above-chance levels. Prospects for improving the method of finding likely positions of affective states are also discussed.
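As an illustration of the evaluation metric only (this sketch is not code from the paper), AUC can be computed with the Mann-Whitney formulation: the fraction of (positive, negative) example pairs that the detector's scores rank correctly, counting ties as half-correct. Chance performance yields .5 and perfect ranking yields 1, matching the scale used in the reported results.

```python
def auc(labels, scores):
    # Mann-Whitney formulation of Area Under the ROC Curve:
    # the fraction of (positive, negative) pairs where the
    # positive example receives the higher score (ties count 0.5).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for illustration: three of four pairs
# are ranked correctly, giving AUC = 0.75.
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```

In practice a library routine (e.g., scikit-learn's `roc_auc_score`) would be used, but the pairwise definition above makes the chance-level interpretation explicit: a detector that scores all examples identically ranks every pair as a tie and scores exactly .5.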
Original language: English (US)
Number of pages: 4
State: Published - 2015
Externally published: Yes
Event: 8th International Conference on Educational Data Mining (EDM 2015) - Madrid, Spain
Duration: Jun 26 2015 - Jun 29 2015


Conference: 2015 International Conference on Educational Data Mining
Abbreviated title: EDM 2015


Keywords:
  • affect detection
  • facial expression recognition
  • reading


