Can Computers Outperform Humans in Detecting User Zone-Outs? Implications for Intelligent Interfaces

Nigel Bosch, Sidney K. D'Mello

Research output: Contribution to journal › Article › peer-review

Abstract

The ability to identify from video whether a user is "zoning out" (mind wandering) has many HCI applications (e.g., distance learning, high-stakes vigilance tasks). However, it remains unknown how well humans can perform this task, how they compare to automatic computerized approaches, and how a fusion of the two might improve accuracy. We analyzed videos of users' faces and upper bodies recorded 10 s prior to self-reported mind wandering (i.e., ground truth) while they engaged in a computerized reading task. We found that a state-of-the-art machine learning model had accuracy comparable to the aggregated judgments of nine untrained human observers (area under the receiver operating characteristic curve [AUC] = .598 versus .589). A fusion of the two (AUC = .644) outperformed each alone, presumably because each focused on complementary cues. Furthermore, adding more humans beyond 3-4 observers yielded diminishing returns. We discuss implications of human-computer fusion as a means to improve accuracy in complex tasks.

Original language: English (US)
Article number: 10
Journal: ACM Transactions on Computer-Human Interaction
Volume: 29
Issue number: 2
DOIs
State: Published - Apr 2022

Keywords

  • Mind wandering
  • attention-aware interfaces
  • facial expression recognition
  • human-machine comparison

ASJC Scopus subject areas

  • Human-Computer Interaction

