TY - JOUR
T1 - Large language models for whole-learner support
T2 - opportunities and challenges
AU - Mannekote, Amogh
AU - Davies, Adam
AU - Pinto, Juan D.
AU - Zhang, Shan
AU - Olds, Daniel
AU - Schroeder, Noah L.
AU - Lehman, Blair
AU - Zapata-Rivera, Diego
AU - Zhai, ChengXiang
N1 - Publisher Copyright:
Copyright © 2024 Mannekote, Davies, Pinto, Zhang, Olds, Schroeder, Lehman, Zapata-Rivera and Zhai.
PY - 2024
Y1 - 2024
AB - In recent years, large language models (LLMs) have seen rapid advancement and adoption, and are increasingly being used in educational contexts. In this perspective article, we explore the open challenge of leveraging LLMs to create personalized learning environments that support the “whole learner” by modeling and adapting to both cognitive and non-cognitive characteristics. We identify three key challenges toward this vision: (1) improving the interpretability of LLMs' representations of whole learners, (2) implementing adaptive technologies that can leverage such representations to provide tailored pedagogical support, and (3) authoring and evaluating LLM-based educational agents. For interpretability, we discuss approaches for explaining LLM behaviors in terms of their internal representations of learners; for adaptation, we examine how LLMs can be used to provide context-aware feedback and scaffold non-cognitive skills through natural language interactions; and for authoring, we highlight the opportunities and challenges involved in using natural language instructions to specify behaviors of educational agents. Addressing these challenges will enable personalized AI tutors that can enhance learning by accounting for each student's unique background, abilities, motivations, and socioemotional needs.
KW - AI and education
KW - educational authoring tool
KW - interpretability
KW - large language model (LLM)
KW - non-cognitive aspects of learning
KW - pedagogical support of students
UR - http://www.scopus.com/inward/record.url?scp=85208628729&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85208628729&partnerID=8YFLogxK
U2 - 10.3389/frai.2024.1460364
DO - 10.3389/frai.2024.1460364
M3 - Article
C2 - 39474601
AN - SCOPUS:85208628729
SN - 2624-8212
VL - 7
JO - Frontiers in Artificial Intelligence
JF - Frontiers in Artificial Intelligence
M1 - 1460364
ER -