FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs

Mintong Kang, Linyi Li, Bo Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite the great success achieved by deep neural networks (DNNs) recently, several concerns have been raised regarding their robustness against adversarial perturbations as well as their large model size in resource-constrained environments. Recent studies on robust learning indicate that there is a tradeoff between robustness and model size. For instance, larger smoothed models would provide higher robustness certification. Recent works have tried to weaken such a tradeoff by training small models via optimized pruning. However, these methods usually do not directly take specific neuron properties such as their importance into account. In this paper, we focus on designing a quantitative criterion, neuron Shapley, to evaluate the neuron weight/filter importance within DNNs, leading to effective unstructured/structured pruning strategies to improve the certified robustness of the pruned models. However, directly computing Shapley values for neurons is of exponential computational complexity, and thus we propose a fast and approximated Shapley (FaShapley) method via gradient-based approximation and optimized sample size. Theoretically, we analyze the desired properties (e.g., linearity and symmetry) and sample complexity of FaShapley. Empirically, we conduct extensive experiments on different datasets with both unstructured pruning and structured pruning. The results on several DNN architectures trained with different robust learning algorithms show that FaShapley achieves state-of-the-art certified robustness under different settings.
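To make the gradient-based approximation idea concrete, here is a minimal sketch of how a first-order Taylor surrogate can stand in for exact (exponential-cost) Shapley computation in unstructured pruning. This is an illustrative approximation in the same spirit as the abstract, not the paper's exact FaShapley algorithm; the function names and the |w · ∂L/∂w| scoring rule are assumptions for the example.

```python
import numpy as np

def taylor_importance(weights, grads):
    # First-order Taylor estimate of each weight's marginal contribution
    # to the loss: |w * dL/dw|. This is a common gradient-based surrogate
    # for the exact Shapley value, which would require evaluating an
    # exponential number of neuron coalitions.
    return np.abs(weights * grads)

def unstructured_prune_mask(weights, grads, sparsity):
    # Return a boolean mask that keeps the most important weights,
    # zeroing out the `sparsity` fraction with the lowest scores.
    scores = taylor_importance(weights, grads)
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    # Threshold at the k-th smallest score; prune everything at or below it.
    thresh = np.partition(scores.ravel(), k - 1)[k - 1]
    return scores > thresh
```

For structured pruning, the same score would be aggregated per filter (e.g., summed over a filter's weights) before thresholding.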

Original language: English (US)
Title of host publication: Proceedings - 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 575-592
Number of pages: 18
ISBN (Electronic): 9781665462990
DOIs
State: Published - 2023
Event: 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023 - Raleigh, United States
Duration: Feb 8 2023 - Feb 10 2023

Publication series

Name: Proceedings - 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023

Conference

Conference: 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023
Country/Territory: United States
City: Raleigh
Period: 2/8/23 - 2/10/23

Keywords

  • certified robustness
  • model pruning

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Safety, Risk, Reliability and Quality
  • Artificial Intelligence
