TY - GEN
T1 - Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
T2 - 27th ACM SIGSAC Conference on Computer and Communications Security, CCS 2020
AU - Shan, Shawn
AU - Wenger, Emily
AU - Wang, Bolun
AU - Li, Bo
AU - Zheng, Haitao
AU - Zhao, Ben Y.
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/10/30
Y1 - 2020/10/30
AB - Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to compute adversarial examples that exploit them. In our work, we explore a new "honeypot" approach to protect DNN models. We intentionally inject trapdoors, honeypot weaknesses in the classification manifold that attract attackers searching for adversarial examples. Attackers' optimization algorithms gravitate towards trapdoors, leading them to produce attacks similar to trapdoors in the feature space. Our defense then identifies attacks by comparing neuron activation signatures of inputs to those of trapdoors. In this paper, we introduce trapdoors and describe an implementation of a trapdoor-enabled defense. First, we analytically prove that trapdoors shape the computation of adversarial attacks so that attack inputs will have feature representations very similar to those of trapdoors. Second, we experimentally show that trapdoor-protected models can detect, with high accuracy, adversarial examples generated by state-of-the-art attacks (PGD, optimization-based CW, Elastic Net, BPDA), with negligible impact on normal classification. These results generalize across classification domains, including image, facial, and traffic-sign recognition. We also present significant results measuring trapdoors' robustness against customized adaptive attacks (countermeasures).
KW - adversarial examples
KW - honeypots
KW - neural networks
UR - http://www.scopus.com/inward/record.url?scp=85095312253&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85095312253&partnerID=8YFLogxK
U2 - 10.1145/3372297.3417231
DO - 10.1145/3372297.3417231
M3 - Conference contribution
AN - SCOPUS:85095312253
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 67
EP - 83
BT - CCS 2020 - Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery
Y2 - 9 November 2020 through 13 November 2020
ER -