Abstract
We investigate the generalizability of face-based detectors of mind wandering across task contexts. We leveraged data from two lab studies: one in which 152 college students read a scientific text and another in which 109 college students watched a narrative film. We automatically extracted facial expression and body motion features, which were used to train supervised machine learning models on each dataset as well as on a concatenated dataset. We applied models from each task context (scientific text or narrative film) to the alternate context to study generalizability. We found that models trained on the narrative film dataset generalized to the scientific text dataset with no modifications, but the predicted mind wandering rate needed to be adjusted before models trained on the scientific text dataset would generalize to the narrative film dataset. Additionally, we analyzed the generalizability of individual features and found that the lip tightener and jaw drop action units had the greatest potential to generalize across task contexts. We discuss findings and applications of our work to attention-aware learning technologies.
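The cross-context transfer described above can be illustrated with a minimal sketch. The snippet below trains a classifier on synthetic stand-ins for one task context and applies it to another, then adjusts the decision threshold so the predicted mind wandering rate matches a target base rate. All data, feature dimensions, and the specific rate-adjustment rule (quantile thresholding) are illustrative assumptions, not the paper's actual method or features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two datasets: features might correspond to
# facial action unit intensities; labels to mind wandering yes/no.
# (Purely illustrative -- not the study's real data or features.)
X_text = rng.normal(size=(300, 5))
y_text = (X_text[:, 0] + rng.normal(size=300) > 0).astype(int)
X_film = rng.normal(loc=0.5, size=(200, 5))
y_film = (X_film[:, 0] + rng.normal(size=200) > 0.5).astype(int)

# Train on one task context...
model = LogisticRegression().fit(X_text, y_text)

# ...apply to the other. If base rates differ across contexts, the raw
# predictions can over- or under-predict mind wandering.
scores = model.predict_proba(X_film)[:, 1]

# One plausible rate adjustment: choose the threshold so the predicted
# mind wandering rate matches an assumed target rate for the new context.
target_rate = y_film.mean()  # hypothetical target; unknown in practice
threshold = np.quantile(scores, 1 - target_rate)
y_pred = (scores >= threshold).astype(int)
```

After thresholding, `y_pred.mean()` is close to `target_rate` by construction, regardless of how miscalibrated the transferred scores were.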
Original language | English (US)
---|---
Number of pages | 8
State | Published - 2017
Event | 2017 International Conference on Educational Data Mining - Wuhan, China; Jun 25 2017 → Jun 28 2017; Conference number: 10
Conference

Conference | 2017 International Conference on Educational Data Mining
---|---
Abbreviated title | EDM 2017
Country/Territory | China
City | Wuhan
Period | 6/25/17 → 6/28/17
Keywords
- mind wandering
- mental states
- attention aware interfaces
- cross-corpus training