Robustness Certification with Refinement

Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a novel approach for the certification of neural networks against adversarial perturbations which combines scalable overapproximation methods with precise (mixed integer) linear programming. This results in significantly better precision than state-of-the-art verifiers on challenging feedforward and convolutional neural networks with piecewise linear activation functions.
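The abstract describes a two-stage scheme: a cheap, scalable overapproximation is tried first, and a more precise (mixed integer) linear programming step refines the cases the cheap analysis cannot decide. The sketch below illustrates that overapproximate-then-refine pattern on a toy ReLU network, using interval bound propagation for the cheap stage and input-box splitting as a stand-in for the paper's MILP refinement; all names, the network, and the splitting strategy are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: certify that a tiny ReLU network's output is
# always positive on an input box, refining when intervals are too loose.
# This substitutes branch-and-bound input splitting for the paper's
# (mixed integer) linear programming refinement step.

def affine(W, b, lo, hi):
    """Interval propagation through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l += w * xl
                h += w * xh
            else:
                l += w * xh
                h += w * xl
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu(lo, hi):
    """Interval propagation through the piecewise linear ReLU."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def out_lower_bound(net, lo, hi):
    """Cheap overapproximation: interval lower bound on the output."""
    (W1, b1), (W2, b2) = net
    lo, hi = relu(*affine(W1, b1, lo, hi))
    lo, hi = affine(W2, b2, lo, hi)
    return lo[0]

def certify_positive(net, lo, hi, depth):
    """Try the cheap bound; on failure, refine by splitting the widest
    input dimension and certifying both halves (up to a depth budget)."""
    if out_lower_bound(net, lo, hi) > 0:
        return True
    if depth == 0:
        return False  # refinement budget exhausted: inconclusive
    i = max(range(len(lo)), key=lambda j: hi[j] - lo[j])
    mid = 0.5 * (lo[i] + hi[i])
    return (certify_positive(net, lo, hi[:i] + [mid] + hi[i + 1:], depth - 1)
            and certify_positive(net, lo[:i] + [mid] + lo[i + 1:], hi, depth - 1))

# Toy network computing ReLU(x) - ReLU(x) + 1, i.e. constantly 1:
# intervals lose the correlation between the two ReLU copies, so the
# cheap bound alone cannot certify positivity, but refinement can.
net = (([[1.0], [1.0]], [0.0, 0.0]),   # hidden layer: h = ReLU([x, x])
       ([[1.0, -1.0]], [1.0]))         # output: h[0] - h[1] + 1

print(certify_positive(net, [-1.0], [1.0], depth=0))  # False: intervals give [0, 2]
print(certify_positive(net, [-1.0], [1.0], depth=3))  # True: splitting tightens the bound
```

The example mirrors the division of labor in the abstract: the interval stage is fast but imprecise on correlated ReLU terms, and the refinement stage recovers exactness on the subproblems where precision matters.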
Original language: English (US)
Title of host publication: International Conference on Learning Representations
State: Published - 2019
Externally published: Yes

Keywords

  • Robustness certification
  • Verification of neural networks
  • MILP solvers
  • Abstract interpretation
  • Adversarial attacks

