Learning to play in a day: Faster deep reinforcement learning by optimality tightening

Frank S. He, Yang Liu, Alexander G. Schwing, Jian Peng

Research output: Contribution to conference › Paper › peer review

Abstract

We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy.
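The abstract does not give formulas, but the optimality-tightening idea it describes can be read as follows: multi-step discounted returns along a sampled trajectory yield lower bounds on the optimal Q-value, and the training objective penalizes the current estimate whenever it violates the tightest such bound, which propagates reward information over many steps at once. The sketch below is one illustrative reading of that constrained-optimization idea, not the paper's exact objective; the function name, the penalty weight `lam`, and the quadratic penalty form are assumptions.

```python
import numpy as np

def tightening_penalty(q_t, rewards, boot_max_q, gamma=0.99, lam=1.0):
    """Quadratic penalty for violating multi-step lower bounds on Q(s_t, a_t).

    q_t        : current estimate Q(s_t, a_t) (scalar)
    rewards    : rewards[j] = r_{t+j}, j = 0..K-1, from a sampled trajectory
    boot_max_q : boot_max_q[k-1] = max_a Q(s_{t+k}, a), bootstrapped estimates
    gamma      : discount factor
    lam        : penalty weight (an assumed hyperparameter, not from the source)
    """
    K = len(rewards)
    bounds = []
    ret = 0.0
    for k in range(1, K + 1):
        # k-step discounted return plus a bootstrapped tail:
        # L_{t,k} = sum_{j<k} gamma^j r_{t+j} + gamma^k max_a Q(s_{t+k}, a)
        ret += gamma ** (k - 1) * rewards[k - 1]
        bounds.append(ret + gamma ** k * boot_max_q[k - 1])
    lower = max(bounds)                  # tightest available lower bound
    violation = max(0.0, lower - q_t)    # only penalize when Q falls below it
    return lam * violation ** 2
```

In this reading, the penalty would be added to the usual one-step Q-learning loss, so a single update is pulled toward information from an entire reward sequence rather than only the next transition.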

Original language: English (US)
State: Published - 2017
Event: 5th International Conference on Learning Representations, ICLR 2017 - Toulon, France
Duration: Apr 24 2017 – Apr 26 2017

Conference

Conference: 5th International Conference on Learning Representations, ICLR 2017
Country: France
City: Toulon
Period: 4/24/17 – 4/26/17

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

