Abstract
Increased attention to the relationships between affect and learning has led to the development of machine-learned models that can identify students' affective states in computerized learning environments. Data for these affect detectors have been collected from multiple modalities, including physical sensors, dialogue logs, and logs of students' interactions with the learning environment. While researchers have successfully developed detectors based on each of these sources, little work has been done to compare the performance of these detectors. In this paper, we address this issue by comparing interaction-based and video-based affect detectors for a physics game called Physics Playground. Specifically, we report on the development and detection accuracy of two suites of affect and behavioral detectors. The first suite of detectors applies facial expression recognition to video data collected with webcams, while the second focuses on students' interactions with the game as recorded in log files. Ground-truth affect and behavior annotations for both face- and interaction-based detectors were obtained via live field observations during gameplay. We first compare the performance of these detectors in predicting students' affective states and off-task behaviors, and then outline the strengths and weaknesses of each approach.
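As a minimal sketch of the kind of evaluation the abstract describes, the snippet below compares two detectors' confidence scores against ground-truth field-observation labels using Cohen's kappa and AUC, two metrics commonly used for affect-detector evaluation. This is not the paper's actual pipeline; all arrays and variable names here are hypothetical placeholders with synthetic data.

```python
# Hedged sketch: evaluating video- and interaction-based detectors
# against field-observation ground truth. Synthetic data throughout.
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical ground-truth labels from live field observations:
# 1 = observed state present (e.g., boredom), 0 = absent.
ground_truth = rng.integers(0, 2, size=200)

# Hypothetical detector confidence scores in [0, 1] for the same
# observation windows, one per detector suite.
video_scores = np.clip(ground_truth * 0.6 + rng.normal(0.2, 0.25, 200), 0, 1)
interaction_scores = np.clip(ground_truth * 0.5 + rng.normal(0.25, 0.3, 200), 0, 1)

for name, scores in [("video", video_scores),
                     ("interaction", interaction_scores)]:
    predictions = (scores >= 0.5).astype(int)  # threshold confidences
    kappa = cohen_kappa_score(ground_truth, predictions)
    auc = roc_auc_score(ground_truth, scores)
    print(f"{name}: kappa={kappa:.3f}, AUC={auc:.3f}")
```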
| Original language | English (US) |
|---|---|
| Number of pages | 8 |
| State | Published - 2015 |
| Externally published | Yes |
| Event | 2015 International Conference of Educational Data Mining, Madrid, Spain. Duration: Jun 26 2015 → Jun 29 2015. Conference number: 8 |
Conference

| Conference | 2015 International Conference of Educational Data Mining |
|---|---|
| Abbreviated title | EDM 2015 |
| Country/Territory | Spain |
| City | Madrid |
| Period | 6/26/15 → 6/29/15 |
Keywords
- video-based detectors
- interaction-based detectors
- affect
- behavior
- Physics Playground