TY - JOUR
T1 - Using one-shot machine learning to implement real-time multimodal learning analytics
AU - Junokas, Michael J.
AU - Kohlburn, Greg
AU - Kumar, Sahil
AU - Lane, Benjamin
AU - Fu, Wai-Tat
AU - Lindgren, Robb W.
PY - 2017
Y1 - 2017
AB - Educational research has demonstrated the importance of embodiment in the design of student learning environments, connecting bodily actions to critical concepts. Gesture-recognition algorithms have become important tools in leveraging this connection but are limited in their development, focusing primarily on traditional machine-learning paradigms. We describe our approach to real-time learning analytics, using a gesture-recognition system to interpret movement in an educational context. We train a hybrid parametric, hierarchical hidden-Markov model using a one-shot construct, learning from singular, user-defined gestures. This model gives us access to three different modes of data streams: skeleton positions, kinematic features, and internal model parameters. Such a structure presents many challenges, including anticipating the optimal feature sets to analyze and creating effective mapping schemas. Despite these challenges, our method allows users to facilitate productive simulation interactions, fusing these streams into embodied semiotic structures defined by the individual. This work has important implications for the future of multimodal learning analytics and educational technology.
KW - Cognitive embodiment
KW - Educational technology
KW - Gesture recognition
KW - One-shot machine learning
UR - http://www.scopus.com/inward/record.url?scp=85019867289&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85019867289&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85019867289
VL - 1828
SP - 89
EP - 93
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
SN - 1613-0073
ER -