Abstract
Educational research has demonstrated the importance of embodiment in the design of student learning environments, connecting bodily actions to critical concepts. Gesture-recognition algorithms have become important tools for leveraging this connection, but their development has focused primarily on traditional machine-learning paradigms. We describe our approach to real-time learning analytics, using a gesture-recognition system to interpret movement in an educational context. We train a hybrid parametric, hierarchical hidden Markov model in a one-shot fashion, learning from single, user-defined gestures. This model gives us access to three modes of data streams: skeleton positions, kinematic features, and internal model parameters. Such a structure presents challenges, including anticipating the optimal feature sets to analyze and creating effective mapping schemas. Despite these challenges, our method allows users to engage in productive simulation interactions, fusing these streams into embodied semiotic structures defined by the individual. This work has important implications for the future of multimodal learning analytics and educational technology.
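To make the one-shot idea concrete, here is a minimal sketch of how a left-to-right hidden Markov model can be seeded from a single demonstrated gesture: the demonstration is split evenly in time, each segment initializes one state's Gaussian emission, and new movement sequences are scored against the model. All names, parameters, and the Viterbi-style scoring below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def one_shot_hmm(demo, n_states=5, var_floor=1e-2):
    """Seed a left-to-right HMM from a single demo: a (T, D) feature array.
    Each state's Gaussian mean/variance comes from one temporal segment."""
    segments = np.array_split(demo, n_states)          # even temporal split
    means = np.stack([s.mean(axis=0) for s in segments])
    variances = np.stack([s.var(axis=0) + var_floor for s in segments])
    # Left-to-right transitions: stay or advance; final state absorbs.
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = A[i, i + 1] = 0.5
    A[-1, -1] = 1.0
    return means, variances, A

def viterbi_score(seq, means, variances, A):
    """Best-path log-likelihood of seq under the model (start in state 0)."""
    n_states = means.shape[0]
    # Per-frame diagonal-Gaussian log densities, shape (T, n_states).
    diff = seq[:, None, :] - means[None, :, :]
    logB = -0.5 * ((diff ** 2 / variances).sum(-1)
                   + np.log(2 * np.pi * variances).sum(-1))
    logA = np.log(np.where(A > 0, A, 1e-300))
    alpha = np.full(n_states, -np.inf)
    alpha[0] = logB[0, 0]
    for t in range(1, len(seq)):
        alpha = logB[t] + np.max(alpha[:, None] + logA, axis=0)
    return alpha.max()
```

In use, the single demonstration defines the model, and subsequent movement is compared to it; sequences resembling the demonstration score higher than dissimilar ones, which is the basis for recognizing the user-defined gesture in real time.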
| Original language | English (US) |
|---|---|
| Pages (from-to) | 89-93 |
| Number of pages | 5 |
| Journal | CEUR Workshop Proceedings |
| Volume | 1828 |
| State | Published - 2017 |
| Event | Joint 6th Multimodal Learning Analytics Workshop and the Second Cross-LAK Workshop, MMLA-CrossLAK 2017 - Vancouver, Canada |
| Duration | Mar 14 2017 → … |
Keywords
- Cognitive embodiment
- Educational technology
- Gesture recognition
- One-shot machine learning
ASJC Scopus subject areas
- General Computer Science