Intrinsically Interpretable Artificial Neural Networks for Learner Modeling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Modern AI algorithms are so complex that it is often impossible even for expert AI engineers to fully explain how they make decisions. Researchers in education are increasingly using such "black-box" algorithms for a wide variety of tasks. This lack of transparency has rightfully raised concerns over issues of fairness, accountability, and trust. Post-hoc explainability techniques exist that aim to address this issue. However, studies in both educational and non-educational contexts have highlighted fundamental problems with these approaches. In this proposed project, we take an alternative approach that aims to make complex AI learner models more intrinsically interpretable, while illustrating how such interpretability can be evaluated. We aim to (1) develop an interpretable neural network, comparing accuracy and issues relevant to interpretability approaches as a whole, (2) evaluate this model's level of interpretability using a human-grounded evaluation approach, and (3) validate the model's inner representations and explore some hypothetical advantages of interpretable models, including their use for knowledge discovery.

Original language: English (US)
Title of host publication: Proceedings of the 17th International Conference on Educational Data Mining, EDM 2024
Editors: Carrie Demmans Epp, Benjamin Paaßen, David Joyner
Publisher: International Educational Data Mining Society
Pages: 982-985
Number of pages: 4
ISBN (Print): 9781733673655
DOIs
State: Published - 2024
Event: 17th International Conference on Educational Data Mining, EDM 2024 - Atlanta, United States
Duration: Jul 14 2024 - Jul 17 2024

Publication series

Name: Proceedings of the International Conference on Educational Data Mining
ISSN (Electronic): 2960-2866

Conference

Conference: 17th International Conference on Educational Data Mining, EDM 2024
Country/Territory: United States
City: Atlanta
Period: 7/14/24 - 7/17/24

Keywords

  • evaluating interpretability
  • Explainable AI
  • interpretable neural networks
  • model transparency

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Human-Computer Interaction
  • Information Systems
