Off-policy reinforcement learning with Gaussian processes

Girish Chowdhary, Miao Liu, Robert Grande, Thomas Walsh, Jonathan How, Lawrence Carin

Research output: Contribution to journal › Article › peer-review

Abstract

An off-policy Bayesian nonparametric approximate reinforcement learning framework, termed GPQ, that employs a Gaussian process (GP) model of the value (Q) function is presented in both the batch and online settings. Sufficient conditions on GP hyperparameter selection are established to guarantee convergence of off-policy GPQ in the batch setting, and theoretical and practical extensions are provided for the online case. Empirical results demonstrate that GPQ has competitive learning speed in addition to its convergence guarantees and its ability to automatically choose its own basis locations.
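
To make the idea concrete, the following is a minimal sketch (not the authors' code) of a batch GPQ-style update: a Gaussian process is fit to the Q-function over (state, action) pairs, with off-policy (Q-learning style) targets r + γ·max_a' Q(s', a'). The function names, the data layout, and the use of scikit-learn as the GP backend are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def batch_gpq(transitions, n_actions, gamma=0.9, n_iters=20):
    """Sketch of batch GPQ. transitions: list of (state, action,
    reward, next_state) tuples, with states as 1-D numpy arrays
    and actions as integers."""
    # GP inputs are concatenated (state, action) features.
    X = np.array([np.append(s, a) for s, a, _, _ in transitions])
    rewards = np.array([r for _, _, r, _ in transitions])
    next_states = [s2 for _, _, _, s2 in transitions]

    # The kernel hyperparameters (e.g., RBF length scale) are what the
    # paper's sufficient conditions constrain; here we simply fix them.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)

    y = rewards.copy()  # initial targets: immediate rewards only
    for _ in range(n_iters):
        gp.fit(X, y)
        # Off-policy backup: bootstrap with the greedy (max) action value,
        # independent of the behavior policy that generated the data.
        q_next = np.array([
            max(gp.predict(np.append(s2, a).reshape(1, -1))[0]
                for a in range(n_actions))
            for s2 in next_states
        ])
        y = rewards + gamma * q_next
    return gp

In the online setting described in the paper, the GP model would instead be updated incrementally as transitions arrive, with basis locations chosen automatically by the nonparametric model rather than refit in batch as above.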

Original language: English (US)
Article number: 7004680
Pages (from-to): 227-238
Number of pages: 12
Journal: IEEE/CAA Journal of Automatica Sinica
Volume: 1
Issue number: 3
DOIs:
State: Published - Jul 1 2014
Externally published: Yes

Keywords

  • Bayesian nonparametric
  • Gaussian processes
  • Reinforcement learning
  • off-policy learning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Information Systems
  • Artificial Intelligence
