Doubly robust off-policy value evaluation for reinforcement learning

Nan Jiang, Lihong Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. Solving this problem is often a critical step in applying RL to real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the inherent hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.
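The doubly robust idea described in the abstract can be sketched as follows: combine an approximate model of the target policy's value functions with per-step importance weights, applying the recursion backwards through each trajectory. This is a minimal illustration, not the authors' code; the function name `doubly_robust_value` and the `q_hat`/`v_hat`/`pi_e`/`pi_b` interfaces are assumptions made for the sketch.

```python
def doubly_robust_value(trajectory, q_hat, v_hat, pi_e, pi_b, gamma=1.0):
    """Doubly robust off-policy value estimate for a single trajectory.

    trajectory : list of (state, action, reward) tuples, in time order.
    q_hat(s, a), v_hat(s) : an approximate model of the target policy's
        action- and state-value functions; the estimate stays unbiased
        even if this model is poor, but a good model reduces variance.
    pi_e(a, s), pi_b(a, s) : action probabilities under the target
        (evaluation) and behavior policies.
    """
    v_dr = 0.0
    # Unroll the recursion backwards from the end of the trajectory:
    #   V_DR <- v_hat(s) + rho * (r + gamma * V_DR - q_hat(s, a)),
    # where rho is the per-step importance weight.
    for s, a, r in reversed(trajectory):
        rho = pi_e(a, s) / pi_b(a, s)
        v_dr = v_hat(s) + rho * (r + gamma * v_dr - q_hat(s, a))
    return v_dr
```

With a zero model (`q_hat` and `v_hat` identically zero) the recursion reduces to per-decision importance sampling; the better the model, the smaller the variance of the importance-weighted correction term.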

Original language: English (US)
Title of host publication: 33rd International Conference on Machine Learning, ICML 2016
Editors: Maria Florina Balcan, Kilian Q. Weinberger
Publisher: International Machine Learning Society (IMLS)
Pages: 1022-1035
Number of pages: 14
ISBN (Electronic): 9781510829008
State: Published - 2016
Externally published: Yes
Event: 33rd International Conference on Machine Learning, ICML 2016 - New York City, United States
Duration: Jun 19 2016 - Jun 24 2016

Publication series

Name: 33rd International Conference on Machine Learning, ICML 2016
Volume: 2

Other

Other: 33rd International Conference on Machine Learning, ICML 2016
Country: United States
City: New York City
Period: 6/19/16 - 6/24/16

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Computer Networks and Communications


Cite this

    Jiang, N., & Li, L. (2016). Doubly robust off-policy value evaluation for reinforcement learning. In M. F. Balcan, & K. Q. Weinberger (Eds.), 33rd International Conference on Machine Learning, ICML 2016 (pp. 1022-1035). (33rd International Conference on Machine Learning, ICML 2016; Vol. 2). International Machine Learning Society (IMLS).