TY - GEN
T1 - SemanticAdv: Generating Adversarial Examples via Attribute-Conditioned Image Editing
T2 - 16th European Conference on Computer Vision, ECCV 2020
AU - Qiu, Haonan
AU - Xiao, Chaowei
AU - Yang, Lei
AU - Yan, Xinchen
AU - Lee, Honglak
AU - Li, Bo
N1 - This work was supported in part by AWS Machine Learning Research Awards, National Science Foundation under grants CNS-1422211, CNS-1616575, CNS-1739517, and NSF CAREER Award IIS-1453651.
PY - 2020
Y1 - 2020
N2 - Recent studies have shown that DNNs are vulnerable to adversarial examples: maliciously manipulated inputs crafted to mislead DNNs into making incorrect predictions. Currently, most such adversarial examples aim to guarantee a “subtle perturbation” by limiting the Lp norm of the perturbation. In this paper, we propose SemanticAdv, which generates a new type of semantically realistic adversarial example via attribute-conditioned image editing. Compared to existing methods, SemanticAdv enables fine-grained analysis and evaluation of DNNs under input variations in the attribute space. We conduct comprehensive experiments showing that our adversarial examples not only exhibit semantically meaningful appearances but also achieve high targeted attack success rates under both whitebox and blackbox settings. Moreover, we show that existing pixel-based and attribute-based defense methods fail to defend against SemanticAdv. We demonstrate the applicability of SemanticAdv to both face recognition and general street-view images to show its generalization. We believe our work sheds light on the vulnerabilities of DNNs and can inform novel defense approaches. Our implementation is available at https://github.com/AI-secure/SemanticAdv.
AB - Recent studies have shown that DNNs are vulnerable to adversarial examples: maliciously manipulated inputs crafted to mislead DNNs into making incorrect predictions. Currently, most such adversarial examples aim to guarantee a “subtle perturbation” by limiting the Lp norm of the perturbation. In this paper, we propose SemanticAdv, which generates a new type of semantically realistic adversarial example via attribute-conditioned image editing. Compared to existing methods, SemanticAdv enables fine-grained analysis and evaluation of DNNs under input variations in the attribute space. We conduct comprehensive experiments showing that our adversarial examples not only exhibit semantically meaningful appearances but also achieve high targeted attack success rates under both whitebox and blackbox settings. Moreover, we show that existing pixel-based and attribute-based defense methods fail to defend against SemanticAdv. We demonstrate the applicability of SemanticAdv to both face recognition and general street-view images to show its generalization. We believe our work sheds light on the vulnerabilities of DNNs and can inform novel defense approaches. Our implementation is available at https://github.com/AI-secure/SemanticAdv.
UR - http://www.scopus.com/inward/record.url?scp=85097094597&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097094597&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-58568-6_2
DO - 10.1007/978-3-030-58568-6_2
M3 - Conference contribution
AN - SCOPUS:85097094597
SN - 9783030585679
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 19
EP - 37
BT - Computer Vision – ECCV 2020 – 16th European Conference, 2020, Proceedings
A2 - Vedaldi, Andrea
A2 - Bischof, Horst
A2 - Brox, Thomas
A2 - Frahm, Jan-Michael
PB - Springer
Y2 - 23 August 2020 through 28 August 2020
ER -