Asymptotics of Reinforcement Learning with Neural Networks

Justin Sirignano, Konstantinos Spiliopoulos

Research output: Contribution to journal › Article › peer-review

Abstract

We prove that a single-layer neural network trained with the Q-learning algorithm converges in distribution to a random ordinary differential equation as the size of the model and the number of training steps become large. Analysis of the limit differential equation shows that it has a unique stationary solution that is the solution of the Bellman equation, thus giving the optimal control for the problem. In addition, we study the convergence of the limit differential equation to the stationary solution. As a by-product of our analysis, we obtain the limiting behavior of single-layer neural networks when trained on independent and identically distributed data with stochastic gradient descent under the widely used Xavier initialization.
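To make the setting concrete, the sketch below shows one plausible reading of the object studied: a single-hidden-layer network with the Xavier-style 1/√N output scaling mentioned in the abstract, updated by the standard (semi-gradient) Q-learning rule. This is an illustrative assumption, not the paper's exact parameterization or analysis; the activation, feature map, dimensions, and toy transitions are all hypothetical.

```python
import numpy as np

# Minimal sketch (assumed setup): Q(x, a) = (1/sqrt(N)) * sum_i c_i * tanh(w_i . phi(x, a)),
# with Xavier-type initialization and a stochastic Q-learning / TD(0) update.
rng = np.random.default_rng(0)

N = 200          # number of hidden units (hypothetical)
d = 4            # dimension of the state-action features (hypothetical)
gamma = 0.9      # discount factor
lr = 0.05        # learning rate

# Xavier-style initialization of hidden and output weights.
W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(N, d))
c = rng.normal(0.0, 1.0, size=N)

def q_value(phi):
    """Network output for a state-action feature vector phi."""
    return c @ np.tanh(W @ phi) / np.sqrt(N)

def q_learning_step(phi, reward, next_phis):
    """One stochastic Q-learning update on a (state-action, reward, next-state) sample."""
    global W, c
    target = reward + gamma * max(q_value(p) for p in next_phis)
    h = np.tanh(W @ phi)
    delta = target - (c @ h) / np.sqrt(N)                    # TD error
    # Semi-gradient step: target held fixed, parameters moved along dQ/dtheta.
    grad_c = delta * h / np.sqrt(N)
    grad_W = delta * np.outer(c * (1.0 - h**2), phi) / np.sqrt(N)
    c += lr * grad_c
    W += lr * grad_W
    return delta

# Toy usage on random transitions (purely illustrative, not a real MDP).
for _ in range(1000):
    phi = rng.normal(size=d)
    next_phis = [rng.normal(size=d) for _ in range(3)]       # next state's action features
    q_learning_step(phi, reward=rng.normal(), next_phis=next_phis)
```

The paper's results concern the limit of such updates as N and the number of training steps grow; the loop above only illustrates the form of a single update.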

Original language: English (US)
Pages (from-to): 2-29
Number of pages: 28
Journal: Stochastic Systems
Volume: 12
Issue number: 1
State: Published - Mar 2022

Keywords

  • Q-learning
  • deep reinforcement learning
  • neural networks
  • reinforcement learning
  • weak convergence

ASJC Scopus subject areas

  • Statistics, Probability and Uncertainty
  • Management Science and Operations Research
  • Modeling and Simulation
  • Statistics and Probability

