Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison

Tengyang Xie, Nan Jiang

Research output: Contribution to journal › Conference article › peer-review

Abstract

We prove performance guarantees of two algorithms for approximating Q* in batch reinforcement learning. Compared to classical iterative methods such as Fitted Q-Iteration, whose performance loss incurs quadratic dependence on horizon, these methods estimate (some forms of) the Bellman error and enjoy linear-in-horizon error propagation, a property established for the first time for algorithms that rely solely on batch data and output stationary policies. One of the algorithms uses a novel and explicit importance-weighting correction to overcome the infamous “double sampling” difficulty in Bellman error estimation, and does not use any squared losses. Our analyses reveal its distinct characteristics and potential advantages compared to classical algorithms.
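To illustrate the “double sampling” difficulty referenced above, the following minimal Python sketch (a toy illustration with assumed values, not code from the paper; the names q_sa and targets are hypothetical) shows why the squared TD error computed from a single Bellman-target sample per state-action pair overestimates the true squared Bellman error by the conditional variance of the target.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy example: one (s, a) pair whose Bellman target r + gamma * max_a' Q(s', a')
    # is random because the next state is random. All values are illustrative.
    q_sa = 1.0                                        # candidate Q(s, a)
    targets = rng.choice([0.0, 2.0], size=100_000)    # sampled Bellman targets, mean ~1.0

    # True squared Bellman error: (Q(s, a) - E[target])^2, approximately 0 here.
    true_sq_bellman_error = (q_sa - targets.mean()) ** 2

    # Single-sample squared TD error: E[(Q(s, a) - target)^2]
    # = squared Bellman error + Var(target), so it is biased upward by ~1.0.
    single_sample_estimate = ((q_sa - targets) ** 2).mean()

    print(f"true squared Bellman error: {true_sq_bellman_error:.4f}")
    print(f"single-sample TD estimate:  {single_sample_estimate:.4f}")

Eliminating this variance term without access to a second independent next-state sample is what motivates the importance-weighting correction discussed in the abstract.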

Original language: English (US)
Pages (from-to): 550-559
Number of pages: 10
Journal: Proceedings of Machine Learning Research
Volume: 124
State: Published - 2020
Event: 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020 - Virtual, Online
Duration: Aug 3, 2020 - Aug 6, 2020

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
