Learning invariants using decision trees and implication counterexamples

Pranav Garg, Daniel Neider, P. Madhusudan, Dan Roth

Research output: Contribution to journal › Article › peer-review

Abstract

Inductive invariants can be robustly synthesized using a learning model where the teacher is a program verifier that instructs the learner through concrete program configurations, classified as positive, negative, and implications. We propose the first learning algorithms in this model with implication counterexamples that are based on machine learning techniques. In particular, we extend classical decision-tree learning algorithms in machine learning to handle implication samples, building new scalable ways to construct small decision trees using statistical measures. We also develop a decision-tree learning algorithm in this model that is guaranteed to converge to the right concept (invariant) if one exists. We implement the learners and an appropriate teacher, and show that the resulting invariant synthesis is efficient and convergent for a large suite of programs.
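
The ICE setup described in the abstract can be made concrete with a small sketch. The following is a minimal, hypothetical Python illustration of the kind of sample an ICE learner consumes (positive, negative, and implication counterexamples) and of an entropy-style split score extended with a penalty for splits that separate implication endpoints. The names (ICESample, split_score) and the crossing penalty are expository assumptions, not the statistical measures or algorithms proposed in the paper.

    import math
    from dataclasses import dataclass

    @dataclass
    class ICESample:
        """A toy ICE sample: labeled configurations plus implication pairs."""
        positives: list      # configurations the invariant must include
        negatives: list      # configurations the invariant must exclude
        implications: list   # pairs (p, q): if p is labeled positive, q must be too

    def entropy(pos, neg):
        """Shannon entropy of a two-class leaf; 0.0 when the leaf is pure."""
        total = pos + neg
        if total == 0 or pos == 0 or neg == 0:
            return 0.0
        pp, pn = pos / total, neg / total
        return -(pp * math.log2(pp) + pn * math.log2(pn))

    def split_score(sample, predicate):
        """Lower is better: weighted entropy of both sides of `predicate`,
        plus a penalty for implication pairs the split cuts apart."""
        lp = sum(1 for x in sample.positives if predicate(x))
        ln = sum(1 for x in sample.negatives if predicate(x))
        rp, rn = len(sample.positives) - lp, len(sample.negatives) - ln
        n = lp + ln + rp + rn
        if n == 0:
            return 0.0
        info = ((lp + ln) / n) * entropy(lp, ln) + ((rp + rn) / n) * entropy(rp, rn)
        # Implications whose endpoints land on opposite sides of the cut
        # constrain both subtrees at once; penalizing such cuts is one
        # simple heuristic for keeping implication samples coherent.
        crossing = sum(1 for p, q in sample.implications if predicate(p) != predicate(q))
        return info + crossing / max(1, len(sample.implications))

    # Example: score the candidate split "x <= 5" on a one-variable
    # program state represented as a dict {"x": value}.
    sample = ICESample(
        positives=[{"x": 0}, {"x": 3}],
        negatives=[{"x": 9}],
        implications=[({"x": 4}, {"x": 5})],
    )
    print(split_score(sample, lambda s: s["x"] <= 5))  # 0.0: pure, no crossings

A learner would greedily pick the lowest-scoring predicate at each node, recurse on both sides, and hand the resulting tree (as a candidate invariant) back to the teacher for validation; the paper's actual measures and convergence guarantee differ from this toy heuristic.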

Original language: English (US)
Pages (from-to): 499-512
Number of pages: 14
Journal: ACM SIGPLAN Notices
Volume: 51
Issue number: 1
DOIs
State: Published - Apr 8 2016

Keywords

  • Decision trees
  • ICE learning
  • Invariant synthesis
  • Machine learning

ASJC Scopus subject areas

  • General Computer Science
