Robust Deep Reinforcement Learning with adversarial attacks

Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, Girish Chowdhary

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes adversarial attacks for Reinforcement Learning (RL). These attacks are then leveraged during training to improve the robustness of RL within a robust control framework. We show that adversarial training of deep RL (DRL) algorithms such as Deep Double Q-learning and Deep Deterministic Policy Gradients leads to a significant increase in robustness to parameter variations on RL benchmarks such as the Mountain Car and Hopper environments. The full paper is available at https://arxiv.org/abs/1712.03632 [7].
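To give a rough sense of the kind of attack the abstract describes, the following is a minimal, hypothetical PyTorch sketch of a gradient-based (FGSM-style) perturbation of the agent's observation; the function and parameter names are illustrative assumptions, not the paper's actual implementation, and the paper's attack is more targeted than this plain value-descent step.

    import torch

    def perturb_state(critic, state, action, epsilon=0.01):
        # Hypothetical sketch: FGSM-style attack on the observation.
        # critic(state, action) is assumed to be a torch module
        # returning Q(s, a) for a batch of state-action pairs.
        s = state.clone().detach().requires_grad_(True)
        q = critic(s, action)      # critic's value for the current action
        q.sum().backward()         # gradient of Q with respect to the state
        # Step the observation in the direction that most decreases
        # Q(s, a); the agent is then trained on this perturbed state.
        return (s - epsilon * s.grad.sign()).detach()

During adversarial training, the agent would receive perturb_state(...) in place of the true state, so the learned policy must stay effective under worst-case observation noise, which is what drives the reported robustness to parameter variations.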

Original language: English (US)
Title of host publication: 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 2040-2042
Number of pages: 3
ISBN (Print): 9781510868083
State: Published - 2018
Event: 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018 - Stockholm, Sweden
Duration: Jul 10, 2018 – Jul 15, 2018

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 3
ISSN (Print): 1548-8403
ISSN (Electronic): 1558-2914

Keywords

  • Adversarial machine learning
  • Deep learning
  • Reinforcement Learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
