TY - GEN
T1 - Adversarial robustness vs. model compression, or both?
AU - Ye, Shaokai
AU - Xu, Kaidi
AU - Liu, Sijia
AU - Cheng, Hao
AU - Lambrechts, Jan Henrik
AU - Zhang, Huan
AU - Zhou, Aojun
AU - Ma, Kaisheng
AU - Wang, Yanzhi
AU - Lin, Xue
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - It is well known that deep neural networks (DNNs) are vulnerable to adversarial attacks, which are implemented by adding crafted perturbations to benign examples. Min-max robust optimization based adversarial training can provide a notion of security against adversarial attacks. However, adversarial robustness requires a significantly larger network capacity than natural training with only benign examples. This paper proposes a framework of concurrent adversarial training and weight pruning that enables model compression while preserving adversarial robustness, essentially tackling the dilemma of adversarial training. Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with inherited initialization from the large model, can achieve neither adversarial robustness nor high standard accuracy. Code is available at https://github.com/yeshaokai/Robustness-Aware-Pruning-ADMM.
AB - It is well known that deep neural networks (DNNs) are vulnerable to adversarial attacks, which are implemented by adding crafted perturbations to benign examples. Min-max robust optimization based adversarial training can provide a notion of security against adversarial attacks. However, adversarial robustness requires a significantly larger network capacity than natural training with only benign examples. This paper proposes a framework of concurrent adversarial training and weight pruning that enables model compression while preserving adversarial robustness, essentially tackling the dilemma of adversarial training. Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with inherited initialization from the large model, can achieve neither adversarial robustness nor high standard accuracy. Code is available at https://github.com/yeshaokai/Robustness-Aware-Pruning-ADMM.
UR - http://www.scopus.com/inward/record.url?scp=85081899910&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081899910&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2019.00020
DO - 10.1109/ICCV.2019.00020
M3 - Conference contribution
AN - SCOPUS:85081899910
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 111
EP - 120
BT - Proceedings - 2019 International Conference on Computer Vision, ICCV 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019
Y2 - 27 October 2019 through 2 November 2019
ER -