TY - GEN
T1 - ML-Driven Malware that Targets AV Safety
AU - Jha, Saurabh
AU - Cui, Shengkun
AU - Banerjee, Subho
AU - Cyriac, James
AU - Tsai, Timothy
AU - Kalbarczyk, Zbigniew
AU - Iyer, Ravishankar K.
N1 - Funding Information:
ACKNOWLEDGMENTS This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 15-35070 and CNS 18-16673. We thank our shepherd Kun Sun for insightful discussion and suggestions. We also thank K. Atchley, J. Applequist, Arjun Athreya, and Keywhan Chung for their insightful comments on the early drafts. We would also like to thank NVIDIA Corporation for equipment donation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF and NVIDIA.
PY - 2020/6
Y1 - 2020/6
N2 - Ensuring the safety of autonomous vehicles (AVs) is critical to their mass deployment and public adoption. However, security attacks that violate safety constraints and cause accidents are a significant deterrent to achieving public trust in AVs and hinder a vendor's ability to deploy them. Creating a security hazard that results in a severe safety compromise (for example, an accident) is compelling from an attacker's perspective. In this paper, we introduce an attack model, a method to deploy the attack in the form of smart malware, and an experimental evaluation of its impact on production-grade autonomous driving software. We find that determining the time interval during which to launch the attack is critically important for causing safety hazards (such as collisions) with a high degree of success. For example, the smart malware caused 33X more forced emergency braking than random attacks did, and caused accidents in 52.6% of the driving simulations.
AB - Ensuring the safety of autonomous vehicles (AVs) is critical to their mass deployment and public adoption. However, security attacks that violate safety constraints and cause accidents are a significant deterrent to achieving public trust in AVs and hinder a vendor's ability to deploy them. Creating a security hazard that results in a severe safety compromise (for example, an accident) is compelling from an attacker's perspective. In this paper, we introduce an attack model, a method to deploy the attack in the form of smart malware, and an experimental evaluation of its impact on production-grade autonomous driving software. We find that determining the time interval during which to launch the attack is critically important for causing safety hazards (such as collisions) with a high degree of success. For example, the smart malware caused 33X more forced emergency braking than random attacks did, and caused accidents in 52.6% of the driving simulations.
KW - Autonomous Vehicles
KW - Safety
KW - Security
UR - http://www.scopus.com/inward/record.url?scp=85090410277&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090410277&partnerID=8YFLogxK
U2 - 10.1109/DSN48063.2020.00030
DO - 10.1109/DSN48063.2020.00030
M3 - Conference contribution
AN - SCOPUS:85090410277
T3 - Proceedings - 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2020
SP - 113
EP - 124
BT - Proceedings - 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2020
Y2 - 29 June 2020 through 2 July 2020
ER -