TY - GEN
T1 - Perturbation Type Categorization for Multiple Adversarial Perturbation Robustness
AU - Maini, Pratyush
AU - Chen, Xinyun
AU - Li, Bo
AU - Song, Dawn
N1 - Publisher Copyright:
© 2022 Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022. All rights reserved.
PY - 2022
Y1 - 2022
N2 - Recent works in adversarial robustness have proposed defenses to improve the robustness of a single model against the union of multiple perturbation types. However, these methods still suffer significant trade-offs compared to models specifically trained to be robust against a single perturbation type. In this work, we introduce the problem of categorizing adversarial examples based on their perturbation types. We first show theoretically, on a toy task, that adversarial examples of different perturbation types constitute different distributions, making it possible to distinguish them. We support these arguments with experimental validation on multiple ℓp attacks and common corruptions. Instead of training a single classifier, we propose PROTECTOR, a two-stage pipeline that first categorizes the perturbation type of the input and then makes the final prediction using the classifier specifically trained against the predicted perturbation type. We theoretically show that at test time the adversary faces a natural trade-off between fooling the perturbation classifier and the succeeding classifier optimized with perturbation-specific adversarial training. This makes it challenging for an adversary to plant strong attacks against the whole pipeline. Experiments on MNIST and CIFAR-10 show that PROTECTOR outperforms prior adversarial-training-based defenses by over 5% when tested against the union of ℓ1, ℓ2, and ℓ∞ attacks. Additionally, our method extends to a more diverse attack suite, also showing large robustness gains against multiple ℓp, spatial, and recolor attacks.
AB - Recent works in adversarial robustness have proposed defenses to improve the robustness of a single model against the union of multiple perturbation types. However, these methods still suffer significant trade-offs compared to models specifically trained to be robust against a single perturbation type. In this work, we introduce the problem of categorizing adversarial examples based on their perturbation types. We first show theoretically, on a toy task, that adversarial examples of different perturbation types constitute different distributions, making it possible to distinguish them. We support these arguments with experimental validation on multiple ℓp attacks and common corruptions. Instead of training a single classifier, we propose PROTECTOR, a two-stage pipeline that first categorizes the perturbation type of the input and then makes the final prediction using the classifier specifically trained against the predicted perturbation type. We theoretically show that at test time the adversary faces a natural trade-off between fooling the perturbation classifier and the succeeding classifier optimized with perturbation-specific adversarial training. This makes it challenging for an adversary to plant strong attacks against the whole pipeline. Experiments on MNIST and CIFAR-10 show that PROTECTOR outperforms prior adversarial-training-based defenses by over 5% when tested against the union of ℓ1, ℓ2, and ℓ∞ attacks. Additionally, our method extends to a more diverse attack suite, also showing large robustness gains against multiple ℓp, spatial, and recolor attacks.
UR - http://www.scopus.com/inward/record.url?scp=85139892587&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139892587&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85139892587
T3 - Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
SP - 1317
EP - 1327
BT - Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
PB - Association For Uncertainty in Artificial Intelligence (AUAI)
T2 - 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
Y2 - 1 August 2022 through 5 August 2022
ER -