A Model Checking Based Approach to Detect Safety-Critical Adversarial Examples on Autonomous Driving Systems

Zhen Huang, Bo Li, De Hui Du, Qin Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The safety of autonomous driving systems (ADS) with machine learning (ML) components is threatened by adversarial examples. Mainstream defense techniques against such threats focus on the adversarial examples that make the ML model fail. However, such an adversarial example does not necessarily cause safety problems for the entire ADS. Therefore, a method for detecting the adversarial examples that lead the ADS to unsafe states would help improve these defense techniques. This paper proposes an approach based on model checking to detect such safety-critical adversarial examples in typical autonomous driving scenarios. The autonomous driving scenario and the semantic effect of adversarial attacks on object detection are specified with a Network of Timed Automata model. The safety properties of the ADS are specified and verified with the UPPAAL model checker to determine whether the adversarial examples lead to safety problems. The model checking result can reveal the critical time interval during which an adversarial attack leads to an unsafe state in a given scenario. The approach is demonstrated on a popular adversarial attack algorithm in a typical autonomous driving scenario, and its effectiveness is shown through a series of simulations on the CARLA platform.
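The verification step described in the abstract relies on safety properties expressed as temporal-logic queries over the timed-automata model. As an illustration only (the automaton and location names below are hypothetical, not taken from the paper), a UPPAAL safety query of the kind used for such scenarios might look like:

```
// Invariant query in UPPAAL's TCTL fragment: on every path, at every
// state, the ego vehicle never reaches the (hypothetical) Collision
// location, even while the attack automaton perturbs object detection.
A[] not EgoVehicle.Collision
```

Here `A[]` is UPPAAL's "invariantly" operator; if the query fails only when the attack automaton is active within some interval, that interval is a candidate safety-critical attack window in the sense described above.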

Original language: English (US)
Title of host publication: Theoretical Aspects of Computing – ICTAC 2022 – 19th International Colloquium, Proceedings
Editors: Helmut Seidl, Zhiming Liu, Corina S. Pasareanu
Publisher: Springer
Pages: 238–254
Number of pages: 17
ISBN (Print): 9783031177149
DOIs
State: Published - 2022
Externally published: Yes
Event: 19th International Colloquium on Theoretical Aspects of Computing, ICTAC 2022 - Tbilisi, Georgia
Duration: Sep 27 2022 – Sep 29 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13572 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 19th International Colloquium on Theoretical Aspects of Computing, ICTAC 2022
Country/Territory: Georgia
City: Tbilisi
Period: 9/27/22 – 9/29/22

Keywords

  • Adversarial examples
  • Autonomous driving
  • Model checking

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)
