TY - CONF
T1 - MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
T2 - 8th International Conference on Learning Representations, ICLR 2020
AU - Zhai, Runtian
AU - Dan, Chen
AU - He, Di
AU - Zhang, Huan
AU - Gong, Boqing
AU - Ravikumar, Pradeep
AU - Hsieh, Cho-Jui
AU - Wang, Liwei
N1 - Funding Information:
We thank Tianle Cai for helpful discussions and suggestions. This work was done while Runtian Zhai was visiting UCLA under the Top-Notch Undergraduate Program of the Peking University School of EECS. Chen Dan and Pradeep Ravikumar acknowledge the support of Rakuten Inc. and NSF via IIS-1909816. Huan Zhang and Cho-Jui Hsieh acknowledge the support of NSF via IIS-1719097. Liwei Wang acknowledges the support of the Beijing Academy of Artificial Intelligence.
Publisher Copyright:
© 2020 8th International Conference on Learning Representations, ICLR 2020. All rights reserved.
PY - 2020
Y1 - 2020
N2 - Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work (Cohen et al., 2019) shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radii. Our code is available at https://github.com/RuntianZ/macer.
UR - http://www.scopus.com/inward/record.url?scp=85150664647&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85150664647&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85150664647
Y2 - 30 April 2020
ER -