Abstract
We study time-inhomogeneous episodic reinforcement learning (RL) under general function approximation and sparse rewards. We design a new algorithm, Variance-weighted Optimistic Q-Learning (VOQL), based on Q-learning, and bound its regret assuming closure under Bellman backups and bounded Eluder dimension for the regression function class. As a special case, VOQL achieves Õ(d√(TH) + d⁶H⁵) regret over T episodes for a horizon-H MDP under d-dimensional linear function approximation, which is asymptotically optimal. Our algorithm incorporates weighted regression-based upper and lower bounds on the optimal value function to obtain this improved regret. The algorithm is computationally efficient given a regression oracle over the function class, making this the first computationally tractable and statistically optimal approach for linear MDPs.
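To make the "weighted regression" idea concrete, below is a minimal numpy sketch of variance-weighted ridge regression, the core primitive the abstract alludes to. This is an illustration under assumed names and shapes (`phi`, `sigma_bar`, `lam` are hypothetical), not the paper's exact algorithm or its bonus construction.

```python
# Minimal sketch (not the paper's exact algorithm): variance-weighted
# ridge regression, the regression primitive behind weighted
# optimistic/pessimistic value bounds. All names/shapes are illustrative.
import numpy as np

def weighted_ridge_regression(phi, y, sigma_bar, lam=1.0):
    """Solve argmin_theta sum_k (phi_k @ theta - y_k)^2 / sigma_bar_k^2
    + lam * ||theta||^2.

    phi       : (K, d) feature matrix (one row per observed transition)
    y         : (K,) regression targets (e.g., reward + next-state value)
    sigma_bar : (K,) per-sample variance upper bounds used as weights
    lam       : ridge regularizer
    """
    w = 1.0 / sigma_bar**2                      # precision weights
    A = phi.T @ (w[:, None] * phi) + lam * np.eye(phi.shape[1])
    b = phi.T @ (w * y)
    theta = np.linalg.solve(A, b)
    return theta, A

# Toy usage with random d-dimensional linear features.
rng = np.random.default_rng(0)
K, d = 200, 5
phi = rng.normal(size=(K, d))
theta_star = rng.normal(size=d)
sigma_bar = rng.uniform(0.5, 2.0, size=K)       # heteroscedastic noise scale
y = phi @ theta_star + sigma_bar * rng.normal(size=K)
theta_hat, A = weighted_ridge_regression(phi, y, sigma_bar)
# An elliptical bonus ||phi||_{A^{-1}} of this kind is the usual way to
# widen phi @ theta_hat into upper/lower confidence bounds.
bonus = np.sqrt(phi[0] @ np.linalg.solve(A, phi[0]))
```

Down-weighting high-variance samples is what tightens the leading regret term from the unweighted d^(3/2)-type rates to the d√(TH) rate stated above.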
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 987-1063 |
| Number of pages | 77 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 195 |
| State | Published - 2023 |
| Externally published | Yes |
| Event | 36th Annual Conference on Learning Theory, COLT 2023 - Bangalore, India |
| Duration | Jul 12 2023 → Jul 15 2023 |
Keywords
- eluder dimension
- model-free algorithms
- nonlinear function approximation
- reinforcement learning
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability