A tutorial on linear function approximators for dynamic programming and reinforcement learning

Alborz Geramifard, Thomas J. Walsh, Stefanie Tellex, Girish Chowdhary, Nicholas Roy, Jonathan P. How

Research output: Contribution to journal › Review article › peer-review

Abstract

A Markov Decision Process (MDP) is a natural framework for formulating sequential decision-making problems under uncertainty. In recent years, researchers have greatly advanced algorithms for learning and acting in MDPs. This article reviews such algorithms, beginning with well-known dynamic programming methods for solving MDPs such as policy iteration and value iteration, then describes approximate dynamic programming methods such as trajectory-based value iteration, and finally moves to reinforcement learning methods such as Q-Learning, SARSA, and least-squares policy iteration. We describe algorithms in a unified framework, giving pseudocode together with memory and iteration complexity analysis for each. Empirical evaluations of these techniques with four representations across four domains provide insight into how these algorithms perform with various feature sets in terms of running time and performance.
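Since the abstract only names the algorithms, the following is a minimal sketch of one of them, SARSA with a linear function approximator, using the simplest linear representation (one-hot features over state-action pairs). The toy 5-state chain MDP, the constants, and the helper names (features, q_value, step) are illustrative assumptions for this sketch, not details drawn from the article.

    import random

    # Hypothetical toy domain: a 5-state chain; reaching the right end pays +1.
    N_STATES, N_ACTIONS = 5, 2          # actions: 0 = left, 1 = right
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # step size, discount, exploration rate

    def features(s, a):
        # One-hot feature vector phi(s, a); with these features, linear
        # approximation reduces to the tabular case.
        phi = [0.0] * (N_STATES * N_ACTIONS)
        phi[s * N_ACTIONS + a] = 1.0
        return phi

    def q_value(w, s, a):
        # Linear approximator: Q(s, a) = w . phi(s, a)
        return sum(wi * fi for wi, fi in zip(w, features(s, a)))

    def epsilon_greedy(w, s):
        if random.random() < EPS:
            return random.randrange(N_ACTIONS)
        return max(range(N_ACTIONS), key=lambda a: q_value(w, s, a))

    def step(s, a):
        # Toy dynamics: move left or right; episode ends at the right end.
        s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        return s2, r, s2 == N_STATES - 1

    w = [0.0] * (N_STATES * N_ACTIONS)
    for episode in range(500):
        s = 0
        a = epsilon_greedy(w, s)
        done = False
        while not done:
            s2, r, done = step(s, a)
            a2 = epsilon_greedy(w, s2)
            # SARSA temporal-difference error with the linear Q estimate
            target = r if done else r + GAMMA * q_value(w, s2, a2)
            delta = target - q_value(w, s, a)
            # The gradient of a linear Q w.r.t. w is the feature vector itself
            for i, fi in enumerate(features(s, a)):
                w[i] += ALPHA * delta * fi
            s, a = s2, a2

    print("Learned Q(s, right) along the chain:",
          [round(q_value(w, s, 1), 2) for s in range(N_STATES)])

Because the features are one-hot, each weight update touches a single entry; with richer (non-tabular) features the same update rule generalizes across states, which is the central idea the tutorial develops.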

Original language: English (US)
Pages (from-to): 375-454
Number of pages: 80
Journal: Foundations and Trends in Machine Learning
Volume: 6
Issue number: 4
DOIs
State: Published - 2013
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Artificial Intelligence
