Abstraction selection in model-based reinforcement learning

Nan Jiang, Alex Kulesza, Satinder Singh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

State abstractions are often used to reduce the complexity of model-based reinforcement learning when only limited quantities of data are available. However, choosing the appropriate level of abstraction is an important problem in practice. Existing approaches have theoretical guarantees only under strong assumptions on the domain or with asymptotically large amounts of data, but in this paper we propose a simple algorithm based on statistical hypothesis testing that comes with a finite-sample guarantee under assumptions on the candidate abstractions. Our algorithm trades off the low approximation error of finer abstractions against the low estimation error of coarser abstractions, resulting in a loss bound that depends only on the quality of the best available abstraction and is polynomial in the planning horizon.
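
The trade-off the abstract describes can be illustrated with a toy sketch. The Python snippet below is not the paper's algorithm; it only mimics the idea of keeping a coarser abstraction as long as a crude two-sample z-test on empirical rewards cannot distinguish the raw states it merges (a real treatment would also test transition distributions and use the paper's finite-sample bounds). All function names and the z = 2.0 threshold are illustrative assumptions.

```python
# Hypothetical sketch of abstraction selection by hypothesis testing.
# Keep a coarse abstraction while the data cannot statistically
# distinguish the raw states it merges; otherwise fall back to a finer one.
import numpy as np
from collections import defaultdict

def raw_reward_stats(transitions):
    """Empirical mean reward, variance, and sample count per (raw state, action)."""
    buckets = defaultdict(list)
    for s, a, r, s_next in transitions:
        buckets[(s, a)].append(r)
    return {k: (np.mean(v), np.var(v), len(v)) for k, v in buckets.items()}

def consistent(stats, phi, z=2.0):
    """Check whether states merged by abstraction `phi` look alike:
    every pair of merged (state, action) cells must have mean rewards
    within z combined standard errors. Coarser phi => stronger claim."""
    groups = defaultdict(list)
    for (s, a), cell in stats.items():
        groups[(phi(s), a)].append(cell)
    for cells in groups.values():
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                m1, v1, n1 = cells[i]
                m2, v2, n2 = cells[j]
                se = np.sqrt(v1 / n1 + v2 / n2) + 1e-12
                if abs(m1 - m2) > z * se:
                    return False  # the finer distinction is statistically real
    return True

def select_abstraction(transitions, candidates):
    """`candidates` is ordered coarse -> fine. Return the coarsest
    abstraction the data cannot reject; fall back to the finest."""
    stats = raw_reward_stats(transitions)
    for phi in candidates:
        if consistent(stats, phi):
            return phi
    return candidates[-1]

# Example: raw states 0..3, reward 1 iff s >= 2. Merging everything fails
# the test; merging {0,1} and {2,3} passes, so the middle candidate wins.
data = [(s, 0, float(s >= 2), None) for s in [0, 1, 2, 3] * 50]
candidates = [lambda s: 0, lambda s: s // 2, lambda s: s]  # coarse -> fine
chosen = select_abstraction(data, candidates)
print(chosen(0), chosen(2))  # -> 0 1
```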

Original language: English (US)
Title of host publication: 32nd International Conference on Machine Learning, ICML 2015
Editors: Francis Bach, David Blei
Publisher: International Machine Learning Society (IMLS)
Pages: 179-188
Number of pages: 10
ISBN (Electronic): 9781510810587
State: Published - Jan 1 2015
Externally published: Yes
Event: 32nd International Conference on Machine Learning, ICML 2015 - Lille, France
Duration: Jul 6 2015 - Jul 11 2015

Publication series

Name: 32nd International Conference on Machine Learning, ICML 2015
Volume: 1

Other

Other: 32nd International Conference on Machine Learning, ICML 2015
Country: France
City: Lille
Period: 7/6/15 - 7/11/15

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Science Applications
