SafetyNet: Detecting and Rejecting Adversarial Examples Robustly

Jiajun Lu, Theerasit Issaranon, David Alexander Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analysis that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is applied to an important and novel application, SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks whether a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.
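The abstract's detection idea is to attach a detector to a standard network and flag inputs whose late-layer activation patterns look atypical. The sketch below illustrates that general idea only: it binarizes features into discrete "codes" and uses a simple nearest-centroid rule as a stand-in for the paper's actual classifier, with synthetic Gaussian features in place of real network activations. All names and numbers here are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_code(features, threshold=0.0):
    """Binarize late-layer activations into a discrete on/off code."""
    return (features > threshold).astype(float)

# Synthetic stand-ins for late-layer features; a real detector would
# extract these from a trained network on natural vs. attacked inputs.
natural = rng.normal(1.0, 1.0, size=(200, 64))
adversarial = rng.normal(-1.0, 1.0, size=(200, 64))

# Mean code for each population (nearest-centroid stand-in for a
# learned classifier such as an RBF-SVM over the codes).
nat_centroid = activation_code(natural).mean(axis=0)
adv_centroid = activation_code(adversarial).mean(axis=0)

def detect(features):
    """Flag an input as adversarial if its activation code lies closer
    to the adversarial-code centroid than to the natural-code centroid."""
    code = activation_code(features)
    d_nat = np.linalg.norm(code - nat_centroid)
    d_adv = np.linalg.norm(code - adv_centroid)
    return bool(d_adv < d_nat)

test_natural = rng.normal(1.0, 1.0, size=64)
test_attacked = rng.normal(-1.0, 1.0, size=64)
print(detect(test_natural), detect(test_attacked))
```

A network wrapped this way can reject inputs the detector flags rather than classify them, which is the "detect and reject" behavior the title describes.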

Original language: English (US)
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 446-454
Number of pages: 9
ISBN (Electronic): 9781538610329
DOI: 10.1109/ICCV.2017.56
State: Published - Dec 22 2017
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy
Duration: Oct 22 2017 - Oct 29 2017

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Volume: 2017-October
ISSN (Print): 1550-5499

Other

Other: 16th IEEE International Conference on Computer Vision, ICCV 2017
Country: Italy
City: Venice
Period: 10/22/17 - 10/29/17

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Lu, J., Issaranon, T., & Forsyth, D. A. (2017). SafetyNet: Detecting and Rejecting Adversarial Examples Robustly. In Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017 (pp. 446-454). [8237318] (Proceedings of the IEEE International Conference on Computer Vision; Vol. 2017-October). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ICCV.2017.56

@inproceedings{b724075f8347438dae185c534fc3bf85,
title = "SafetyNet: Detecting and Rejecting Adversarial Examples Robustly",
abstract = "We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analysis that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is applied to an important and novel application, SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks whether a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.",
author = "Jiajun Lu and Theerasit Issaranon and Forsyth, {David Alexander}",
year = "2017",
month = "12",
day = "22",
doi = "10.1109/ICCV.2017.56",
language = "English (US)",
series = "Proceedings of the IEEE International Conference on Computer Vision",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "446--454",
booktitle = "Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017",
address = "United States",

}

TY - GEN

T1 - SafetyNet

T2 - Detecting and Rejecting Adversarial Examples Robustly

AU - Lu, Jiajun

AU - Issaranon, Theerasit

AU - Forsyth, David Alexander

PY - 2017/12/22

Y1 - 2017/12/22

N2 - We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analysis that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is applied to an important and novel application, SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks whether a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.

AB - We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analysis that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is applied to an important and novel application, SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks whether a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.

UR - http://www.scopus.com/inward/record.url?scp=85041927082&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85041927082&partnerID=8YFLogxK

U2 - 10.1109/ICCV.2017.56

DO - 10.1109/ICCV.2017.56

M3 - Conference contribution

AN - SCOPUS:85041927082

T3 - Proceedings of the IEEE International Conference on Computer Vision

SP - 446

EP - 454

BT - Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017

PB - Institute of Electrical and Electronics Engineers Inc.

ER -