Learning invariants using decision trees and implication counterexamples

Pranav Garg, Daniel Neider, P. Madhusudan, Dan Roth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Inductive invariants can be robustly synthesized using a learning model where the teacher is a program verifier who instructs the learner through concrete program configurations, classified as positive, negative, or implication samples. We propose the first learning algorithms in this model with implication counterexamples that are based on machine learning techniques. In particular, we extend classical decision-tree learning algorithms in machine learning to handle implication samples, building new scalable ways to construct small decision trees using statistical measures. We also develop a decision-tree learning algorithm in this model that is guaranteed to converge to the right concept (invariant) if one exists. We implement the learners and an appropriate teacher, and show that the resulting invariant synthesis is efficient and convergent for a large suite of programs.
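The ICE model sketched in the abstract can be illustrated with a toy learner. In this hypothetical sketch (not the paper's decision-tree algorithm), hypotheses are simple threshold predicates inv(x) := x <= c over a single integer variable; the teacher's samples are positive points (must satisfy inv), negative points (must not), and implication pairs (a, b) requiring inv(a) => inv(b):

```python
# Toy ICE-style learner over threshold hypotheses inv(x) := x <= c.
# This is an illustrative sketch only; the paper learns decision trees
# over richer predicates, not single thresholds.

def consistent(c, positives, negatives, implications):
    """Check whether inv(x) := x <= c is consistent with all ICE samples."""
    inv = lambda x: x <= c
    return (all(inv(p) for p in positives)            # positives inside
            and not any(inv(n) for n in negatives)    # negatives outside
            and all(inv(b)                            # implications respected:
                    for a, b in implications          # if inv(a) holds,
                    if inv(a)))                       # inv(b) must too

def learn_threshold(positives, negatives, implications, lo=-100, hi=100):
    """Return the smallest threshold c in [lo, hi] consistent with the
    ICE samples, or None if no such threshold exists."""
    for c in range(lo, hi + 1):
        if consistent(c, positives, negatives, implications):
            return c
    return None
```

For example, with positives {0, 3}, negative {10}, and the implication (3, 5), any consistent threshold must satisfy c >= 3 (positives), c < 10 (negative), and, because inv(3) then holds, also c >= 5 (implication), so the learner returns 5. The key point the example shows is that implication samples are not pre-labeled: the learner is free to classify both endpoints negative or both positive, as long as it never accepts the antecedent while rejecting the consequent.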

Original language: English (US)
Title of host publication: POPL 2016 - Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages
Editors: Rupak Majumdar, Rastislav Bodik
Publisher: Association for Computing Machinery
Pages: 499-512
Number of pages: 14
ISBN (Electronic): 9781450335492
DOI: 10.1145/2837614.2837664
State: Published - Jan 11 2016
Event: 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2016 - St. Petersburg, United States
Duration: Jan 20 2016 - Jan 22 2016

Publication series

Name: Conference Record of the Annual ACM Symposium on Principles of Programming Languages
Volume: 20-22-January-2016
ISSN (Print): 0730-8566


Keywords

  • Decision trees
  • ICE learning
  • Invariant synthesis
  • Machine learning

ASJC Scopus subject areas

  • Software


Cite this

    Garg, P., Neider, D., Madhusudan, P., & Roth, D. (2016). Learning invariants using decision trees and implication counterexamples. In R. Majumdar, & R. Bodik (Eds.), POPL 2016 - Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (pp. 499-512). (Conference Record of the Annual ACM Symposium on Principles of Programming Languages; Vol. 20-22-January-2016). Association for Computing Machinery. https://doi.org/10.1145/2837614.2837664