Abstract

In this paper, we present a new approach to transfer in Reinforcement Learning (RL) for cross-domain tasks. Unlike existing transfer approaches, where target-task learning is accelerated through initialized learning from the source, we propose to adapt and reuse the optimal source policy directly in related domains. We show that the optimal policy from a related source task can be near optimal in the target domain, provided an adaptive policy accounts for the model error between the target and the projected source. A significant advantage of the proposed policy augmentation is in generalizing policies across related domains without having to re-learn the new tasks. We demonstrate that this architecture leads to better sample efficiency in transfer, reducing the sample complexity of target-task learning to that of target apprentice learning.
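The abstract's core idea, reusing a source-optimal policy while an adaptive term compensates for the model error between the target dynamics and the projected source dynamics, can be illustrated with a minimal one-dimensional sketch. All function names and the linear dynamics below are illustrative assumptions, not the paper's actual models.

```python
# Hedged sketch of policy adaptation across related domains:
# pick the source-optimal action, then correct it so the target
# transition reproduces the source-predicted next state.
# All dynamics and names here are invented for illustration.

def source_policy(state):
    # Placeholder "optimal" policy learned in the source domain.
    return -0.5 * state

def source_model(state, action):
    # Assumed source transition model (deterministic, 1-D).
    return state + action

def target_model(state, action):
    # Assumed target transition model; differs from the source,
    # creating the model error the adaptive policy must absorb.
    return state + 0.8 * action + 0.1

def adapted_action(state):
    # Adaptive policy: start from the source action, then solve the
    # (here, linear) target dynamics so the realized next state matches
    # the next state the source model would have produced.
    a_src = source_policy(state)
    desired_next = source_model(state, a_src)
    # Invert target_model(s, a) = desired_next for this linear sketch:
    # s + 0.8*a + 0.1 = desired_next  =>  a = (desired_next - s - 0.1) / 0.8
    return (desired_next - state - 0.1) / 0.8

s = 2.0
a = adapted_action(s)
# The adapted action makes the target behave like the projected source.
assert abs(target_model(s, a) - source_model(s, source_policy(s))) < 1e-9
```

In this toy setting the correction can be solved in closed form because the assumed dynamics are linear; the paper's setting instead learns or estimates the correction for general related domains.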

Original language: English (US)
Title of host publication: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 7525-7532
Number of pages: 8
ISBN (Electronic): 9781538630815
DOIs
State: Published - Sep 10 2018
Event: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018 - Brisbane, Australia
Duration: May 21 2018 - May 25 2018

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729

Conference

Conference: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
Country: Australia
City: Brisbane
Period: 5/21/18 - 5/25/18

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Fingerprint: 'Cross-Domain Transfer in Reinforcement Learning Using Target Apprentice'
