Personalized Learning with AI Tutors: Assessing and Advancing Epistemic Trustworthiness

Research output: Contribution to journal › Article › peer-review

Abstract

AI tutors promise to expand access to personalized learning, improving student achievement and addressing disparities in the resources available to students across socioeconomic contexts. The rapid development and introduction of AI tutors raise fundamental questions of epistemic trust in education. What criteria should guide students' critical assessments of the epistemic trustworthiness of these new technologies? And how should these technologies, and the environments in which they are situated, be designed to improve their epistemic trustworthiness? In this article, Nicolas Tanchuk and Rebecca Taylor argue for a shared responsibility model of epistemic trust that includes a duty to collaboratively improve the epistemic environment. Building on prior frameworks, the model they advance identifies five higher-order criteria to assess the epistemic credibility of individuals, tools, and institutions and to guide the co-creation of the epistemic environment: (1) epistemic motivation, (2) epistemic inclusivity, (3) epistemic accountability, (4) epistemic accuracy, and (5) reciprocal epistemic transparency.

Original language: English (US)
Pages (from-to): 327-353
Number of pages: 27
Journal: Educational Theory
Volume: 75
Issue number: 2
Early online date: Mar 19, 2025
State: Published - Apr 2025

Keywords

  • AI tutors
  • epistemic environments
  • epistemic trustworthiness
  • expertise
  • personalized learning

ASJC Scopus subject areas

  • Education
