Abstract
We prove that a single-layer neural network trained with the Q-learning algorithm converges in distribution to a random ordinary differential equation as the size of the model and the number of training steps become large. Analysis of the limit differential equation shows that it has a unique stationary solution that is the solution of the Bellman equation, thus giving the optimal control for the problem. In addition, we study the convergence of the limit differential equation to the stationary solution. As a by-product of our analysis, we obtain the limiting behavior of single-layer neural networks when trained on independent and identically distributed data with stochastic gradient descent under the widely used Xavier initialization.
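To make the setting concrete, the sketch below shows (in Python/NumPy) the kind of object the abstract analyzes: a single-hidden-layer network approximating Q(s, a), initialized with Xavier (Glorot) scaling and updated by a semi-gradient Q-learning step toward the Bellman target r + γ max_a' Q(s', a'). This is only an illustrative sketch, not the paper's construction or proof device; the environment dimensions, network width, activation, step size, and discount factor are assumptions chosen for readability.

```python
# Minimal illustrative sketch (not the paper's construction):
# a single-hidden-layer Q-network with Xavier initialization,
# trained by a semi-gradient Q-learning update on one transition.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, N_ACTIONS, WIDTH = 4, 2, 256   # hypothetical sizes
GAMMA, LR = 0.99, 1e-3                    # hypothetical hyperparameters

# Xavier (Glorot) initialization: weight variance ~ 1 / fan_in per layer.
W1 = rng.normal(0.0, np.sqrt(1.0 / STATE_DIM), size=(WIDTH, STATE_DIM))
W2 = rng.normal(0.0, np.sqrt(1.0 / WIDTH), size=(N_ACTIONS, WIDTH))

def q_values(s):
    """Single-hidden-layer network: Q(s, .) = W2 @ tanh(W1 @ s)."""
    h = np.tanh(W1 @ s)
    return W2 @ h, h

def q_learning_step(s, a, r, s_next):
    """One semi-gradient Q-learning update on the transition (s, a, r, s')."""
    global W1, W2
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    # TD error against the Bellman target; no gradient flows through the target.
    delta = r + GAMMA * np.max(q_next) - q[a]
    # Gradients of Q(s, a) w.r.t. W2 and W1 (chain rule through tanh).
    grad_W2 = np.zeros_like(W2)
    grad_W2[a] = h
    grad_W1 = np.outer(W2[a] * (1.0 - h ** 2), s)
    # Semi-gradient ascent step: theta <- theta + lr * delta * grad Q(s, a).
    W2 += LR * delta * grad_W2
    W1 += LR * delta * grad_W1

# Illustrative usage on a synthetic transition.
s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
q_learning_step(s, a=0, r=1.0, s_next=s_next)
```

The paper's limit is taken as WIDTH and the number of such updates grow large; the stationary point of the limiting differential equation corresponds to the solution of the Bellman equation referenced in the abstract.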
Original language | English (US) |
---|---|
Pages (from-to) | 2-29 |
Number of pages | 28 |
Journal | Stochastic Systems |
Volume | 12 |
Issue number | 1 |
DOIs | |
State | Published - Mar 2022 |
Keywords
- Q-learning
- deep reinforcement learning
- neural networks
- reinforcement learning
- weak convergence
ASJC Scopus subject areas
- Statistics, Probability and Uncertainty
- Management Science and Operations Research
- Modeling and Simulation
- Statistics and Probability