TY - GEN
T1 - Can Students Understand AI Decisions Based on Variables Extracted via AutoML?
AU - Tang, Liang
AU - Bosch, Nigel
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - In computer-based education, understanding student data is essential for students, teachers, researchers, and others to adapt to insights gained from analyses (e.g., AI predictions of student outcomes). However, one important question is: how well can students make sense of the data we present? And what factors influence the interpretability of those data? This study assessed students' perceptions of predictive variables (i.e., 'features') used in machine learning models for predicting student outcomes; in particular, we explored features crafted by experts versus those extracted by methods for automatic machine learning (i.e., AutoML). Our results indicated a meaningful difference in students' interpretability perceptions between the expert and AutoML features across two diverse datasets. Additionally, features derived from timing and scoring data were more interpretable than those from interaction (e.g., keystroke) data. Other potential explanations for interpretability differences, including statistical methods, repeated exposure, and lexical familiarity, had relatively minimal impact on interpretability.
UR - http://www.scopus.com/inward/record.url?scp=85217840811&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85217840811&partnerID=8YFLogxK
DO - 10.1109/SMC54092.2024.10831034
M3 - Conference contribution
AN - SCOPUS:85217840811
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 3342
EP - 3349
BT - 2024 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2024
Y2 - 6 October 2024 through 10 October 2024
ER -