TY - GEN
T1 - Susceptibility of Autonomous Driving Agents to Learning-Based Action-Space Attacks
AU - Wu, Yuting
AU - Lou, Xin
AU - Zhou, Pengfei
AU - Tan, Rui
AU - Kalbarczyk, Zbigniew T.
AU - Iyer, Ravishankar K.
N1 - This project is supported by the National Research Foundation, Singapore, and the National University of Singapore through its National Satellite of Excellence in Trustworthy Software Systems (NSOE-TSS) office under the Trustworthy Computing for Secure Smart Nation Grant (TCSSNG) award no. NSOE-TSS2020-01, and in part by the National Research Foundation, Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program.
PY - 2023
Y1 - 2023
N2 - Intelligent vehicles of increasing complexity face cybersecurity threats. This paper studies action-space attacks on autonomous driving agents that make decisions using either a traditional modular processing pipeline or a recently proposed end-to-end driving model obtained via deep reinforcement learning (DRL). Such attacks alter the actuation signal and pose direct risks to the vehicle's state. We formulate attack construction as a DRL problem based on input from either an extra camera or an inertial measurement unit deployed on the vehicle. The attacks are designed to lurk until a safety-critical moment arises and to cause a side collision upon activation. We analyze the behavioral differences between the two driving agents when subjected to action-space attacks and demonstrate the superior resilience of the modular processing pipeline. We further investigate the performance and limitations of two enhancement methods, i.e., adversarial training through fine-tuning and progressive neural networks. The results offer valuable insights into vehicle safety from the viewpoints of both the assailant and the defender and inform the future design of autonomous driving systems.
AB - Intelligent vehicles of increasing complexity face cybersecurity threats. This paper studies action-space attacks on autonomous driving agents that make decisions using either a traditional modular processing pipeline or a recently proposed end-to-end driving model obtained via deep reinforcement learning (DRL). Such attacks alter the actuation signal and pose direct risks to the vehicle's state. We formulate attack construction as a DRL problem based on input from either an extra camera or an inertial measurement unit deployed on the vehicle. The attacks are designed to lurk until a safety-critical moment arises and to cause a side collision upon activation. We analyze the behavioral differences between the two driving agents when subjected to action-space attacks and demonstrate the superior resilience of the modular processing pipeline. We further investigate the performance and limitations of two enhancement methods, i.e., adversarial training through fine-tuning and progressive neural networks. The results offer valuable insights into vehicle safety from the viewpoints of both the assailant and the defender and inform the future design of autonomous driving systems.
KW - Action-space attack
KW - Autonomous driving
KW - Cybersecurity
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85169460146&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85169460146&partnerID=8YFLogxK
U2 - 10.1109/DSN-W58399.2023.00034
DO - 10.1109/DSN-W58399.2023.00034
M3 - Conference contribution
AN - SCOPUS:85169460146
T3 - Proceedings - 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops Volume, DSN-W 2023
SP - 76
EP - 83
BT - Proceedings - 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops Volume, DSN-W 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops Volume, DSN-W 2023
Y2 - 27 June 2023 through 30 June 2023
ER -