TY - GEN
T1 - FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs
T2 - 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023
AU - Kang, Mintong
AU - Li, Linyi
AU - Li, Bo
N1 - Funding Information:
This work is partially supported by NSF grant No. 1910100, NSF CNS No. 2046726, a C3.ai DTI Award, and the Alfred P. Sloan Foundation.
Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Despite the great success achieved by deep neural networks (DNNs) recently, several concerns have been raised regarding their robustness against adversarial perturbations as well as their large model size in resource-constrained environments. Recent studies on robust learning indicate that there is a tradeoff between robustness and model size. For instance, larger smoothed models would provide higher robustness certification. Recent works have tried to weaken such a tradeoff by training small models via optimized pruning. However, these methods usually do not directly take specific neuron properties, such as their importance, into account. In this paper, we focus on designing a quantitative criterion, neuron Shapley, to evaluate the neuron weight/filter importance within DNNs, leading to effective unstructured/structured pruning strategies to improve the certified robustness of the pruned models. However, directly computing the Shapley value for neurons has exponential computational complexity, so we propose a fast and approximated Shapley (FaShapley) method via gradient-based approximation and optimized sample size. Theoretically, we analyze the desired properties (e.g., linearity and symmetry) and sample complexity of FaShapley. Empirically, we conduct extensive experiments on different datasets with both unstructured pruning and structured pruning. The results on several DNN architectures trained with different robust learning algorithms show that FaShapley achieves state-of-the-art certified robustness under different settings.
AB - Despite the great success achieved by deep neural networks (DNNs) recently, several concerns have been raised regarding their robustness against adversarial perturbations as well as their large model size in resource-constrained environments. Recent studies on robust learning indicate that there is a tradeoff between robustness and model size. For instance, larger smoothed models would provide higher robustness certification. Recent works have tried to weaken such a tradeoff by training small models via optimized pruning. However, these methods usually do not directly take specific neuron properties, such as their importance, into account. In this paper, we focus on designing a quantitative criterion, neuron Shapley, to evaluate the neuron weight/filter importance within DNNs, leading to effective unstructured/structured pruning strategies to improve the certified robustness of the pruned models. However, directly computing the Shapley value for neurons has exponential computational complexity, so we propose a fast and approximated Shapley (FaShapley) method via gradient-based approximation and optimized sample size. Theoretically, we analyze the desired properties (e.g., linearity and symmetry) and sample complexity of FaShapley. Empirically, we conduct extensive experiments on different datasets with both unstructured pruning and structured pruning. The results on several DNN architectures trained with different robust learning algorithms show that FaShapley achieves state-of-the-art certified robustness under different settings.
KW - certified robustness
KW - model pruning
UR - http://www.scopus.com/inward/record.url?scp=85156142145&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85156142145&partnerID=8YFLogxK
U2 - 10.1109/SaTML54575.2023.00044
DO - 10.1109/SaTML54575.2023.00044
M3 - Conference contribution
AN - SCOPUS:85156142145
T3 - Proceedings - 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023
SP - 575
EP - 592
BT - Proceedings - 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 8 February 2023 through 10 February 2023
ER -