TY - GEN
T1 - Characterizing Attacks on Deep Reinforcement Learning
AU - Pan, Xinlei
AU - Xiao, Chaowei
AU - He, Warren
AU - Yang, Shuang
AU - Peng, Jian
AU - Sun, Mingjie
AU - Liu, Mingyan
AU - Li, Bo
AU - Song, Dawn
N1 - Publisher Copyright:
© 2022 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
PY - 2022
Y1 - 2022
N2 - Recent studies show that Deep Reinforcement Learning (DRL) models are vulnerable to adversarial attacks, which attack DRL models by adding small perturbations to the observations. However, some attacks assume full access to the victim model, and others require a huge amount of computation, making them less feasible for real-world applications. In this work, we further explore the vulnerabilities of DRL by studying other aspects of attacks on DRL using realistic and efficient attacks. First, we adapt and propose efficient black-box attacks for settings where the attacker has no access to the DRL model parameters. Second, to address the high computational demands of existing attacks, we introduce efficient online sequential attacks that exploit temporal consistency across consecutive steps. Third, we explore the possibility of an attacker perturbing other aspects of the DRL setting, such as the environment dynamics. Finally, to account for imperfections in how an attacker would inject perturbations in the physical world, we devise a method for generating robust physical perturbations that can be printed; the attack is evaluated on a real-world robot under various conditions. We conduct extensive experiments both in simulation (Atari games, robotics, and autonomous driving) and on real-world robots to compare the effectiveness of the proposed attacks with baseline approaches. To the best of our knowledge, we are the first to apply adversarial attacks on DRL systems to physical robots.
AB - Recent studies show that Deep Reinforcement Learning (DRL) models are vulnerable to adversarial attacks, which attack DRL models by adding small perturbations to the observations. However, some attacks assume full access to the victim model, and others require a huge amount of computation, making them less feasible for real-world applications. In this work, we further explore the vulnerabilities of DRL by studying other aspects of attacks on DRL using realistic and efficient attacks. First, we adapt and propose efficient black-box attacks for settings where the attacker has no access to the DRL model parameters. Second, to address the high computational demands of existing attacks, we introduce efficient online sequential attacks that exploit temporal consistency across consecutive steps. Third, we explore the possibility of an attacker perturbing other aspects of the DRL setting, such as the environment dynamics. Finally, to account for imperfections in how an attacker would inject perturbations in the physical world, we devise a method for generating robust physical perturbations that can be printed; the attack is evaluated on a real-world robot under various conditions. We conduct extensive experiments both in simulation (Atari games, robotics, and autonomous driving) and on real-world robots to compare the effectiveness of the proposed attacks with baseline approaches. To the best of our knowledge, we are the first to apply adversarial attacks on DRL systems to physical robots.
KW - Adversarial Machine Learning
KW - Reinforcement Learning
KW - Robotics
UR - http://www.scopus.com/inward/record.url?scp=85134324368&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85134324368&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85134324368
T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
SP - 1010
EP - 1018
BT - International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022
PB - International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
T2 - 21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022
Y2 - 9 May 2022 through 13 May 2022
ER -