Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks

Shawn Shan, Emily Wenger, Bolun Wang, Bo Li, Haitao Zheng, Ben Y. Zhao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to compute adversarial examples that exploit them. In our work, we explore a new "honeypot" approach to protect DNN models. We intentionally inject trapdoors, honeypot weaknesses in the classification manifold that attract attackers searching for adversarial examples. Attackers' optimization algorithms gravitate towards trapdoors, leading them to produce attacks similar to trapdoors in the feature space. Our defense then identifies attacks by comparing neuron activation signatures of inputs to those of trapdoors. In this paper, we introduce trapdoors and describe an implementation of a trapdoor-enabled defense. First, we analytically prove that trapdoors shape the computation of adversarial attacks so that attack inputs will have feature representations very similar to those of trapdoors. Second, we experimentally show that trapdoor-protected models can detect, with high accuracy, adversarial examples generated by state-of-the-art attacks (PGD, optimization-based CW, Elastic Net, BPDA), with negligible impact on normal classification. These results generalize across classification domains, including image, facial, and traffic-sign recognition. We also present significant results measuring trapdoors' robustness against customized adaptive attacks (countermeasures).
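
The sketch below is not the authors' implementation; it is a minimal illustration of the detection idea the abstract describes: stamp a trigger pattern into some training inputs so the model learns a trapdoor, record the trapdoor's "signature" as the mean penultimate-layer activation of trigger-stamped inputs, and flag test inputs whose activations are too similar to that signature. The feature extractor, mask geometry, and 95th-percentile threshold here are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of trapdoor-based adversarial-example detection (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def apply_trigger(x, mask, pattern):
    """Blend a small trigger pattern into input x (trapdoor injection)."""
    return x * (1 - mask) + pattern * mask

# Placeholder feature extractor: stands in for the penultimate layer of a trained DNN.
W = rng.normal(size=(28 * 28, 128))
def feature_extractor(x):
    return np.maximum(x.reshape(len(x), -1) @ W, 0.0)  # ReLU projection

def cosine_sim(a, b):
    return (a @ b) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b) + 1e-12)

# Build the trapdoor signature: mean feature vector of trigger-stamped inputs.
clean = rng.uniform(size=(256, 28, 28))
mask = np.zeros((28, 28)); mask[-6:, -6:] = 0.1      # small, low-intensity patch (assumed)
pattern = rng.uniform(size=(28, 28))
trapdoored = apply_trigger(clean, mask, pattern)
signature = feature_extractor(trapdoored).mean(axis=0)

# Calibrate a detection threshold from clean inputs (95th percentile is an assumption).
clean_sims = cosine_sim(feature_extractor(clean), signature)
threshold = np.percentile(clean_sims, 95)

def is_adversarial(x):
    """Flag inputs whose activations gravitate toward the trapdoor signature."""
    sim = cosine_sim(feature_extractor(x[None]), signature)[0]
    return sim > threshold

print(is_adversarial(clean[0]))        # clean input: usually below threshold
print(is_adversarial(trapdoored[0]))   # trapdoor-like input: usually above threshold
```

In the paper's setting, attack optimizers are drawn toward the injected trapdoor, so adversarial examples end up with feature representations close to the signature and cross the threshold; clean inputs do not.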

Original language: English (US)
Title of host publication: CCS 2020 - Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 67-83
Number of pages: 17
ISBN (Electronic): 9781450370899
DOIs
State: Published - Oct 30, 2020
Event: 27th ACM SIGSAC Conference on Computer and Communications Security, CCS 2020 - Virtual, Online, United States
Duration: Nov 9, 2020 - Nov 13, 2020

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 27th ACM SIGSAC Conference on Computer and Communications Security, CCS 2020
Country/Territory: United States
City: Virtual, Online
Period: 11/9/20 - 11/13/20

Keywords

  • adversarial examples
  • honeypots
  • neural networks

ASJC Scopus subject areas

  • Software
  • Computer Networks and Communications
