Improving the Robustness of Reinforcement Learning Policies With L1 Adaptive Control

Yikun Cheng, Pan Zhao, Fanxin Wang, Daniel J. Block, Naira Hovakimyan

Research output: Contribution to journal › Article › peer-review

Abstract

A reinforcement learning (RL) control policy can fail in a new or perturbed environment that differs from the training environment, due to the presence of dynamic variations. For controlling systems with continuous state and action spaces, we propose an add-on approach that robustifies a pre-trained RL policy by augmenting it with an ${\mathcal {L}_{1}}$ adaptive controller (${\mathcal {L}_{1}}$AC). Leveraging the capability of an ${\mathcal {L}_{1}}$AC for fast estimation and active compensation of dynamic variations, the proposed approach can improve the robustness of an RL policy that was trained, either in a simulator or in the real world, without accounting for a broad class of dynamic variations. Numerical and real-world experiments empirically demonstrate the efficacy of the proposed approach in robustifying RL policies trained using both model-free and model-based methods.
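To make the add-on structure concrete, the sketch below illustrates one common way an ${\mathcal {L}_{1}}$ adaptive augmentation can wrap a fixed RL policy: a state predictor, a piecewise-constant adaptation law, and a low-pass-filtered compensation input added to the policy's action. This is a minimal illustrative example on a scalar system with assumed names and gains (rl_policy, sigma, a_m, omega_c); it is not the authors' implementation.

```python
import numpy as np

# Minimal sketch of L1 adaptive augmentation of a pre-trained RL policy,
# illustrated on a scalar plant  x_dot = u + sigma(t, x),  where sigma is an
# unknown matched dynamic variation. All names and gains are illustrative
# assumptions, not the paper's implementation.

dt = 0.001          # integration / sampling step
a_m = -10.0         # Hurwitz gain of the state predictor
omega_c = 20.0      # bandwidth of the low-pass filter in the L1 control law

def rl_policy(x):
    # Placeholder for a pre-trained RL policy (e.g., a neural network).
    return -2.0 * x

def sigma(t, x):
    # Unknown dynamic variation that the L1AC estimates and compensates.
    return 0.5 * np.sin(2.0 * t) + 0.3 * x

x, x_hat = 1.0, 1.0   # true state and predictor state
sigma_hat = 0.0       # piecewise-constant uncertainty estimate
u_l1 = 0.0            # low-pass-filtered compensation input

for k in range(5000):
    t = k * dt
    u = rl_policy(x) + u_l1              # RL action augmented by the L1AC

    # True plant (Euler step).
    x += dt * (u + sigma(t, x))

    # State predictor driven by the current uncertainty estimate.
    x_hat += dt * (a_m * (x_hat - x) + u + sigma_hat)

    # Piecewise-constant adaptation law (scalar form).
    x_tilde = x_hat - x
    phi = (np.exp(a_m * dt) - 1.0) / a_m
    sigma_hat = -np.exp(a_m * dt) * x_tilde / phi

    # L1 control law: low-pass filter the negative of the estimate.
    u_l1 += dt * omega_c * (-sigma_hat - u_l1)

print(f"final state: {x:.4f}")
```

Because the augmentation only adds a filtered compensation term to the policy's action, the RL policy itself is left untouched, which is what makes this an add-on scheme rather than a retraining procedure.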

Original language: English (US)
Pages (from-to): 6574-6581
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 7
Issue number: 3
DOIs
State: Published - Jul 1 2022

Keywords

  • Reinforcement learning
  • machine learning for robot control
  • robot safety
  • robust/adaptive control

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence

