Input-Relational Verification of Deep Neural Networks

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the verification of input-relational properties defined over deep neural networks (DNNs), such as robustness against universal adversarial perturbations and monotonicity. Precise verification of these properties requires reasoning about multiple executions of the same DNN. We introduce the novel concept of difference tracking, which computes the difference between the outputs of two executions of the same DNN at all layers. We design a new abstract domain, DiffPoly, for efficient difference tracking that scales to large DNNs. DiffPoly is equipped with custom abstract transformers for affine layers and common activation functions (ReLU, Tanh, Sigmoid, etc.) and can create precise linear cross-execution constraints. We implement an input-relational verifier for DNNs, called RaVeN, which uses DiffPoly and linear program formulations to handle a wide range of input-relational properties. Our experimental results on challenging benchmarks show that, by leveraging precise linear constraints defined over multiple executions of the DNN, RaVeN gains substantial precision over baselines across a wide range of datasets, networks, and input-relational properties.
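The core idea of difference tracking can be illustrated with a minimal interval-based sketch (this is an illustrative simplification, not DiffPoly's actual transformers, which compute linear cross-execution constraints rather than plain intervals). For an affine layer y = Wx + b, the bias cancels in the difference of two executions, so y1 - y2 = W(x1 - x2) and difference bounds propagate via interval arithmetic on W alone; for ReLU, monotonicity and 1-Lipschitzness bound the output difference by the input difference clipped toward zero:

```python
import numpy as np

def affine_diff(W, delta_lo, delta_hi):
    """Propagate difference bounds through an affine layer y = Wx + b.

    Since y1 - y2 = W (x1 - x2), the bias cancels and the bounds
    follow from standard interval matrix multiplication.
    """
    W_pos = np.maximum(W, 0.0)  # positive part of W
    W_neg = np.minimum(W, 0.0)  # negative part of W
    lo = W_pos @ delta_lo + W_neg @ delta_hi
    hi = W_pos @ delta_hi + W_neg @ delta_lo
    return lo, hi

def relu_diff(delta_lo, delta_hi):
    """Propagate difference bounds through ReLU.

    ReLU is monotone and 1-Lipschitz, so ReLU(a) - ReLU(b) always
    lies between min(a - b, 0) and max(a - b, 0).
    """
    return np.minimum(delta_lo, 0.0), np.maximum(delta_hi, 0.0)

# Example: one affine layer followed by ReLU, with the input
# difference x1 - x2 known to lie in the box [-1, 1] x [0.5, 1].
W = np.array([[1.0, -2.0]])
lo, hi = affine_diff(W, np.array([-1.0, 0.5]), np.array([1.0, 1.0]))
lo, hi = relu_diff(lo, hi)
```

Chaining such per-layer transformers yields sound bounds on the output difference of two executions at every layer, which is the quantity input-relational properties like universal-perturbation robustness constrain.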

Original language: English (US)
Article number: 147
Journal: Proceedings of the ACM on Programming Languages
Volume: 8
Early online date: Jun 20 2024
DOIs
State: Published - Jun 20 2024

Keywords

  • Deep Learning
  • Interpretation
  • Relational Verification

ASJC Scopus subject areas

  • Software
  • Safety, Risk, Reliability and Quality
