TY - GEN
T1 - Characterizing adversarial examples based on spatial consistency information for semantic segmentation
AU - Xiao, Chaowei
AU - Deng, Ruizhi
AU - Li, Bo
AU - Yu, Fisher
AU - Liu, Mingyan
AU - Song, Dawn
N1 - Acknowledgments. We thank Warren He, George Philipp, Ziwei Liu, Zhirong Wu, Shizhan Zhu and Xiaoxiao Li for their valuable discussions on this work. This work was supported in part by Berkeley DeepDrive, Compute Canada, NSERC and National Science Foundation under grants CNS-1422211, CNS-1616575, CNS-1739517, JD Grapevine plan, and by the DHS via contract number FA8750-18-2-0011.
PY - 2018
Y1 - 2018
N2 - Deep Neural Networks (DNNs) have been widely applied in various recognition tasks. However, DNNs have recently been shown to be vulnerable to adversarial examples, which can mislead them into making arbitrary incorrect predictions. While adversarial examples are well studied in classification tasks, other learning problems may have different properties. For instance, semantic segmentation requires additional components such as dilated convolutions and multiscale processing. In this paper, we aim to characterize adversarial examples based on spatial context information in semantic segmentation. We observe that spatial consistency information can potentially be leveraged to detect adversarial examples robustly, even when a strong adaptive attacker has access to the model and detection strategies. We also show that adversarial examples based on the attacks considered in this paper barely transfer among models, even though transferability is common in classification. Our observations shed new light on developing adversarial attacks and defenses to better understand the vulnerabilities of DNNs.
KW - Adversarial example
KW - Semantic segmentation
KW - Spatial consistency
UR - https://www.scopus.com/pages/publications/85055114623
UR - https://www.scopus.com/inward/citedby.url?scp=85055114623&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-01249-6_14
DO - 10.1007/978-3-030-01249-6_14
M3 - Conference contribution
AN - SCOPUS:85055114623
SN - 9783030012489
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 220
EP - 237
BT - Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings
A2 - Hebert, Martial
A2 - Ferrari, Vittorio
A2 - Sminchisescu, Cristian
A2 - Weiss, Yair
PB - Springer
T2 - 15th European Conference on Computer Vision, ECCV 2018
Y2 - 8 September 2018 through 14 September 2018
ER -