VOQL: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation

Alekh Agarwal, Yujia Jin, Tong Zhang

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study time-inhomogeneous episodic reinforcement learning (RL) under general function approximation and sparse rewards. We design a new algorithm, Variance-weighted Optimistic Q-Learning (VOQL), based on Q-learning, and bound its regret assuming closure under Bellman backups and bounded Eluder dimension for the regression function class. As a special case, VOQL achieves Õ(d√(TH) + d⁶H⁵) regret over T episodes for a horizon-H MDP under d-dimensional linear function approximation, which is asymptotically optimal. Our algorithm incorporates weighted regression-based upper and lower bounds on the optimal value function to obtain this improved regret. The algorithm is computationally efficient given a regression oracle over the function class, making this the first computationally tractable and statistically optimal approach for linear MDPs.
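To make the "weighted regression-based upper and lower bounds" concrete, the sketch below shows the basic ingredient in the linear special case: a variance-weighted ridge regression fit together with an elliptical-norm bonus that yields optimistic (plus bonus) and pessimistic (minus bonus) value estimates. This is only an illustrative sketch, not the paper's algorithm; the names weighted_ridge, optimistic_value, lambda_reg, beta, and sigma2 are hypothetical choices for illustration.

```python
import numpy as np

def weighted_ridge(phis, targets, sigma2, lambda_reg=1.0):
    """Fit theta minimizing sum_i (phi_i^T theta - y_i)^2 / sigma2_i + lambda_reg * ||theta||^2."""
    d = phis.shape[1]
    w = 1.0 / sigma2                                   # per-sample precision weights
    cov = phis.T @ (phis * w[:, None]) + lambda_reg * np.eye(d)
    theta = np.linalg.solve(cov, phis.T @ (targets * w))
    return theta, cov

def optimistic_value(phi, theta, cov, beta=1.0):
    """Point estimate plus an elliptical-norm exploration bonus.

    A pessimistic (lower-bound) estimate subtracts the same bonus instead.
    """
    bonus = beta * np.sqrt(phi @ np.linalg.solve(cov, phi))
    return phi @ theta + bonus

# Toy usage: 100 transitions with 5-dimensional features and per-sample variances.
rng = np.random.default_rng(0)
phis = rng.normal(size=(100, 5))
targets = phis @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
sigma2 = np.full(100, 0.01)                            # hypothetical variance estimates
theta, cov = weighted_ridge(phis, targets, sigma2)
print(optimistic_value(phis[0], theta, cov))
```

Down-weighting high-variance samples in the regression is what sharpens the confidence width and, in the paper's analysis, removes the extra √H factor from the leading regret term.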

Original language: English (US)
Pages (from-to): 987-1063
Number of pages: 77
Journal: Proceedings of Machine Learning Research
Volume: 195
State: Published - 2023
Externally published: Yes
Event: 36th Annual Conference on Learning Theory, COLT 2023 - Bangalore, India
Duration: Jul 12 2023 - Jul 15 2023

Keywords

  • eluder dimension
  • model-free algorithms
  • nonlinear function approximation
  • Reinforcement learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
