TY - JOUR
T1 - PRIMA: General and precise neural network certification via scalable convex hull approximations
T2 - Proceedings of the ACM on Programming Languages
AU - Müller, Mark Niklas
AU - Makarchuk, Gleb
AU - Singh, Gagandeep
AU - Püschel, Markus
AU - Vechev, Martin
N1 - Publisher Copyright:
© 2022 Owner/Author.
PY - 2022/1
Y1 - 2022/1
AB - Formal verification of neural networks is critical for their safe adoption in real-world applications. However, designing a precise and scalable verifier which can handle different activation functions, realistic network architectures and relevant specifications remains an open and difficult challenge. In this paper, we take a major step forward in addressing this challenge and present a new verification framework, called PRIMA. PRIMA is both (i) general: it handles any non-linear activation function, and (ii) precise: it computes precise convex abstractions involving multiple neurons via novel convex hull approximation algorithms that leverage concepts from computational geometry. The algorithms have polynomial complexity, yield fewer constraints, and minimize precision loss. We evaluate the effectiveness of PRIMA on a variety of challenging tasks from prior work. Our results show that PRIMA is significantly more precise than the state-of-the-art, verifying robustness to input perturbations for up to 20%, 30%, and 34% more images than existing work on ReLU-, Sigmoid-, and Tanh-based networks, respectively. Further, PRIMA enables, for the first time, the precise verification of a realistic neural network for autonomous driving within a few minutes.
KW - Abstract Interpretation
KW - Convexity
KW - Polyhedra
KW - Robustness
UR - http://www.scopus.com/inward/record.url?scp=85123213599&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123213599&partnerID=8YFLogxK
DO - 10.1145/3498704
M3 - Article
AN - SCOPUS:85123213599
SN - 2475-1421
VL - 6
JO - Proceedings of the ACM on Programming Languages
JF - Proceedings of the ACM on Programming Languages
IS - POPL
M1 - 3498696
ER -