The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation

Philip Amortila, Nan Jiang, Csaba Szepesvári

Research output: Contribution to journal › Conference article › peer-review

Abstract

Theoretical guarantees in reinforcement learning (RL) are known to suffer multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet the nature of such approximation factors, especially their optimal form in a given learning problem, is poorly understood. In this paper we study this question in linear off-policy value function estimation, where many open questions remain. We study the approximation factor in a broad spectrum of settings, such as presence vs. absence of state aliasing and full vs. partial coverage of the state space. Our core results include instance-dependent upper bounds on the approximation factors with respect to both the weighted L2-norm (where the weighting is the offline state distribution) and the L∞-norm. We show that these approximation factors are optimal (in an instance-dependent sense) for a number of these settings. In other cases, we show that the instance-dependent parameters which appear in the upper bounds are necessary, and that the finiteness of either alone cannot guarantee a finite approximation factor even in the limit of infinite data.
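For orientation, here is a schematic of the type of guarantee the abstract refers to; the notation below (features Φ, weights θ, offline distribution μ, factor α) is assumed for illustration and is not taken from the paper, which considers several norm pairings. A linear estimate $\hat{v}$ of the target value function $v^\pi$, computed from offline data drawn under μ, would satisfy in the limit of infinite data a bound of the form

$\|\hat{v} - v^\pi\|_{2,\mu} \;\le\; \alpha \cdot \inf_{\theta} \|\Phi\theta - v^\pi\|_{\infty},$

where the infimum is the misspecification error of the linear class and α is the instance-dependent approximation factor whose optimal value the paper characterizes.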

Original language: English (US)
Pages (from-to): 768-790
Number of pages: 23
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: Jul 23 2023 - Jul 29 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

