TY - GEN
T1 - SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
T2 - 16th IEEE International Conference on Computer Vision, ICCV 2017
AU - Lu, Jiajun
AU - Issaranon, Theerasit
AU - Forsyth, David
N1 - Funding Information:
This work is supported in part by ONR MURI Award N00014-16-1-2007, in part by NSF under Grant No. NSF IIS-1421521, and in part by a Google MURA award.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/22
Y1 - 2017/12/22
N2 - We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analysis that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used in an important and novel application, SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks whether an image and its depth map are consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that our SafetyNet is robust to adversarial examples built with currently known attacking approaches.
AB - We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analysis that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used in an important and novel application, SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks whether an image and its depth map are consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post-processing. We demonstrate that our SafetyNet is robust to adversarial examples built with currently known attacking approaches.
UR - http://www.scopus.com/inward/record.url?scp=85041927082&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041927082&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2017.56
DO - 10.1109/ICCV.2017.56
M3 - Conference contribution
AN - SCOPUS:85041927082
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 446
EP - 454
BT - Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 October 2017 through 29 October 2017
ER -