Generalizability of Face-Based Mind Wandering Detection Across Task Contexts

Angela Stewart, Nigel Bosch, Sidney K. D'Mello

Research output: Contribution to conference › Paper › peer-review

Abstract

We investigate the generalizability of face-based detectors of mind wandering across task contexts. We leveraged data from two lab studies: one in which 152 college students read a scientific text and another in which 109 college students watched a narrative film. We automatically extracted facial expression and body motion features, which were used to train supervised machine learning models on each dataset, as well as on a concatenated dataset. We applied models from each task context (scientific text or narrative film) to the alternate context to study generalizability. We found that models trained on the narrative film dataset generalized to the scientific text dataset with no modifications, but the predicted mind wandering rate needed to be adjusted before models trained on the scientific text dataset would generalize to the narrative film dataset. Additionally, we analyzed the generalizability of individual features and found that the lip tightener and jaw drop action units had the greatest potential to generalize across task contexts. We discuss findings and applications of our work to attention-aware learning technologies.
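The rate adjustment the abstract mentions can be illustrated with a minimal sketch. The code below does not reproduce the paper's pipeline (facial action units, body motion features, or its specific classifiers); it uses one synthetic feature per instance and a simple threshold "model" to show the general idea: when a model is applied to a new task context whose feature distribution has shifted, its predicted mind wandering rate drifts, and re-choosing the decision threshold so that the predicted positive rate matches an expected base rate brings predictions back in line. All names, rates, and distributions here are illustrative assumptions.

```python
import random

random.seed(0)

def make_context(n, mw_rate, shift):
    """Synthetic (feature, label) pairs; label 1 = mind wandering.
    `shift` moves the whole feature distribution, mimicking a change
    of task context."""
    data = []
    for _ in range(n):
        label = 1 if random.random() < mw_rate else 0
        feat = random.gauss(1.0 if label else 0.0, 1.0) + shift
        data.append((feat, label))
    return data

# Two hypothetical contexts with the same true mind-wandering rate
# but a feature shift between them (e.g. reading vs. film watching).
train = make_context(500, mw_rate=0.30, shift=0.0)
test = make_context(500, mw_rate=0.30, shift=0.8)

# "Model": the score is the feature itself; the threshold is fit on the
# training context so that the top 30% of instances are flagged.
thr = sorted(f for f, _ in train)[int(0.70 * len(train))]

# Applied unchanged to the new context, the predicted rate is inflated
# by the feature shift.
raw_rate = sum(f > thr for f, _ in test) / len(test)

# Base-rate adjustment: re-pick the threshold so the predicted positive
# rate on the new context matches the expected mind-wandering rate.
target_rate = 0.30
scores = sorted(f for f, _ in test)
adj_thr = scores[int((1 - target_rate) * len(scores))]
adj_rate = sum(f > adj_thr for f, _ in test) / len(test)

print(f"unadjusted predicted rate: {raw_rate:.2f}")
print(f"adjusted predicted rate:   {adj_rate:.2f}")
```

The adjustment changes only how many instances are flagged, not their ranking, so it is a cheap way to transfer a detector when the score ordering generalizes but the calibration does not.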
Original language: English (US)
Number of pages: 8
State: Published - 2017
Event: 2017 International Conference on Educational Data Mining - Wuhan, China
Duration: Jun 25, 2017 – Jun 28, 2017
Conference number: 10

Conference

Conference: 2017 International Conference on Educational Data Mining
Abbreviated title: EDM 2017
Country: China
City: Wuhan
Period: 6/25/17 – 6/28/17

Keywords

  • mind wandering
  • mental states
  • attention aware interfaces
  • cross-corpus training

