Abstract
We describe a method to produce a network against which current attacks such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analysis of why our construction is difficult to defeat, and show experimentally that it is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is applied to an important and novel application, SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks whether an image and its depth map are consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that SafetyNet is robust to adversarial examples built with currently known attack approaches.
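The abstract describes a detect-and-reject architecture: a detector scores each input, and inputs flagged as adversarial are rejected rather than classified. The sketch below is a minimal illustration of that pattern only; the class and function names and the toy scoring rule are hypothetical, not the paper's actual method (which attaches an adversarial detector to the activation patterns of a deep network).

```python
# Minimal sketch of a detect-and-reject classifier in the spirit of
# SafetyNet. All names and the toy scoring rule are illustrative
# assumptions, not the paper's construction.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class DetectAndReject:
    classify: Callable[[List[float]], int]        # base classifier
    detect_score: Callable[[List[float]], float]  # higher = more suspicious
    threshold: float

    def predict(self, x: List[float]) -> Optional[int]:
        """Return a class label, or None to reject a suspected adversarial input."""
        if self.detect_score(x) > self.threshold:
            return None  # reject: detector flags input as likely adversarial
        return self.classify(x)


# Toy stand-ins: classify by the sign of the mean; treat inputs with
# unusually large magnitude as suspicious.
def toy_classifier(x):
    return 1 if sum(x) / len(x) > 0 else 0


def toy_detector(x):
    return max(abs(v) for v in x)


net = DetectAndReject(toy_classifier, toy_detector, threshold=10.0)
print(net.predict([0.5, 1.0, 0.2]))    # in-distribution input: classified
print(net.predict([0.5, 100.0, 0.2]))  # flagged input: rejected (None)
```

The point of the pattern is that an attacker must now fool two mechanisms at once: a Type II attack that flips the classifier's label must also keep the detector score below threshold.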
Original language | English (US) |
---|---|
Title of host publication | Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 446-454 |
Number of pages | 9 |
ISBN (Electronic) | 9781538610329 |
DOIs | 10.1109/ICCV.2017.56 |
State | Published - Dec 22 2017 |
Event | 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy Duration: Oct 22 2017 → Oct 29 2017 |
Publication series
Name | Proceedings of the IEEE International Conference on Computer Vision |
---|---|
Volume | 2017-October |
ISSN (Print) | 1550-5499 |
Other
Other | 16th IEEE International Conference on Computer Vision, ICCV 2017 |
---|---|
Country | Italy |
City | Venice |
Period | 10/22/17 → 10/29/17 |
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition
Cite this
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly. / Lu, Jiajun; Issaranon, Theerasit; Forsyth, David Alexander.
Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017. Institute of Electrical and Electronics Engineers Inc., 2017. p. 446-454 8237318 (Proceedings of the IEEE International Conference on Computer Vision; Vol. 2017-October). Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - SafetyNet
T2 - Detecting and Rejecting Adversarial Examples Robustly
AU - Lu, Jiajun
AU - Issaranon, Theerasit
AU - Forsyth, David Alexander
PY - 2017/12/22
Y1 - 2017/12/22
UR - http://www.scopus.com/inward/record.url?scp=85041927082&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041927082&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2017.56
DO - 10.1109/ICCV.2017.56
M3 - Conference contribution
AN - SCOPUS:85041927082
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 446
EP - 454
BT - Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
PB - Institute of Electrical and Electronics Engineers Inc.
ER -