Deep Reinforcement Learning for Adaptive Learning Systems

Xiao Li, Hanchen Xu, Jinming Zhang, Hua-Hua Chang

Research output: Contribution to journal › Article › peer-review


The adaptive learning problem concerns how to create an individualized learning plan (also referred to as a learning policy) that chooses the most appropriate learning materials based on a learner's latent traits. In this article, we study an important yet less-studied adaptive learning problem: one that assumes continuous latent traits. Specifically, we formulate the adaptive learning problem as a Markov decision process. We assume the latent traits to be continuous with an unknown transition model and apply a model-free deep reinforcement learning algorithm, the deep Q-learning algorithm, which can effectively find the optimal learning policy from data on learners' learning processes without knowing the actual transition model of the learners' continuous latent traits. To utilize the available data efficiently, we also develop a transition model estimator that emulates the learner's learning process using neural networks. The transition model estimator can be used in the deep Q-learning algorithm so that the optimal learning policy for a learner can be discovered more efficiently. Numerical simulation studies verify that the proposed algorithm is very efficient in finding a good learning policy. In particular, with the aid of the transition model estimator, it can find the optimal learning policy after training on only a small number of learners.
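The setup described above (a continuous latent trait with unknown dynamics, actions that select learning materials, and a Q-function approximated by a neural network) can be sketched as a minimal model-free Q-learning loop. Everything below is a hypothetical illustration, not the authors' implementation: the simulated trait dynamics, the reward definition (trait improvement), the three materials, and the tiny one-hidden-layer Q-network are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-dimensional learner: a continuous latent trait theta in
# [0, 1]; actions 0..2 assign learning materials of increasing difficulty.
# The trait improves most when difficulty matches the current trait level.
DIFFICULTY = np.array([0.2, 0.5, 0.8])

def step(theta, a):
    """Simulated transition model (unknown to the learning agent)."""
    gain = 0.1 * np.exp(-((theta - DIFFICULTY[a]) ** 2) / 0.02)
    theta_next = float(np.clip(theta + gain + 0.01 * rng.normal(), 0.0, 1.0))
    return theta_next, theta_next - theta  # reward = trait improvement

# Tiny one-hidden-layer Q-network mapping theta -> a vector of action values.
H = 16
W1 = rng.normal(0.0, 0.5, (H,)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (3, H)); b2 = np.zeros(3)

def q_values(theta):
    h = np.tanh(W1 * theta + b1)
    return W2 @ h + b2, h

GAMMA, LR, EPS = 0.9, 0.05, 0.2
for episode in range(300):              # each episode = one simulated learner
    theta = float(rng.uniform(0.0, 0.3))
    for t in range(20):
        q, h = q_values(theta)
        a = int(rng.integers(3)) if rng.random() < EPS else int(np.argmax(q))
        theta_next, r = step(theta, a)
        target = r + GAMMA * float(np.max(q_values(theta_next)[0]))
        td = target - q[a]              # temporal-difference error
        w2a = W2[a].copy()              # pre-update weights for backprop
        W2[a] += LR * td * h            # one SGD step on 0.5 * td**2
        b2[a] += LR * td
        grad_h = td * w2a * (1.0 - h ** 2)
        W1 += LR * grad_h * theta
        b1 += LR * grad_h
        theta = theta_next

greedy = [int(np.argmax(q_values(t)[0])) for t in (0.15, 0.5, 0.85)]
print(greedy)  # learned material choice at low / medium / high trait
```

The agent never queries the simulator's equations, only its sampled transitions, which is the model-free property the abstract highlights; the article's transition model estimator would additionally fit a neural network to these sampled `(theta, a, theta_next)` triples so that training can reuse data from few real learners.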

Original language: English (US)
Pages (from-to): 220-243
Number of pages: 24
Journal: Journal of Educational and Behavioral Statistics
Issue number: 2
State: Published - Apr 2023


Keywords

  • Markov decision process
  • adaptive learning system
  • deep Q-learning
  • deep reinforcement learning
  • model free
  • neural networks
  • transition model estimator

ASJC Scopus subject areas

  • Education
  • Social Sciences (miscellaneous)


