On the Learnability of Disjunctive Normal Form Formulas

Howard Aizenstein, Leonard Pitt

Research output: Contribution to journal › Article › peer-review

Abstract

We present two related results about the learnability of disjunctive normal form (DNF) formulas. First we show that a common approach for learning arbitrary DNF formulas requires exponential time. We then contrast this with a polynomial-time algorithm for learning “most” (rather than all) DNF formulas. A natural approach for learning Boolean functions involves greedily collecting the prime implicants of the hidden function. In a seminal paper of learning theory, Valiant demonstrated the efficacy of this approach for learning monotone DNF, and suggested this approach for learning DNF. Here we show that no algorithm using such an approach can learn DNF in polynomial time. We show this by constructing a counterexample DNF formula that would force such an algorithm to take exponential time. This counterexample seems to capture much of what makes DNF hard to learn, and thus is useful to consider when evaluating the run time of a proposed DNF learning algorithm. This hardness result, as well as other hardness results for learning DNF, relies on the construction of particular hard-to-learn formulas, formulas that appear to be relatively rare. This raises the question of whether most DNF formulas are learnable. For certain natural definitions of “most DNF formulas,” we answer this question affirmatively.
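The greedy prime-implicant idea the abstract refers to can be illustrated with a minimal sketch. The Python code below is not from the paper: it assumes a monotone target and a membership-style oracle (here the hypothetical function is_positive) that labels assignments, in the spirit of Valiant's monotone-DNF algorithm. Each positive example is greedily shrunk to a prime implicant, and the collected implicants form the hypothesis.

```python
# Illustrative sketch (not the paper's construction): greedily collecting
# prime implicants of a hidden monotone DNF over n variables, given positive
# examples and a membership oracle `is_positive`. Names are hypothetical.

def minimize_to_prime_implicant(x, is_positive):
    """Greedily drop 1-bits from a positive assignment x (tuple of 0/1)
    while the assignment stays positive; the surviving 1-bits are the
    variables of a prime implicant of a monotone target."""
    x = list(x)
    for i in range(len(x)):
        if x[i] == 1:
            x[i] = 0
            if not is_positive(tuple(x)):
                x[i] = 1  # variable i is essential for this term; restore it
    return frozenset(i for i, b in enumerate(x) if b == 1)

def learn_monotone_dnf(positive_examples, is_positive):
    """Collect prime implicants (as sets of variable indices) from a sample
    of positive examples; their disjunction approximates the target."""
    terms = set()
    for x in positive_examples:
        terms.add(minimize_to_prime_implicant(x, is_positive))
    return terms

if __name__ == "__main__":
    # Hidden monotone target: f(x) = x0*x1 OR x2
    f = lambda x: bool((x[0] and x[1]) or x[2])
    sample = [(1, 1, 0), (0, 0, 1), (1, 1, 1)]
    print(learn_monotone_dnf(sample, f))
    # -> {frozenset({0, 1}), frozenset({2})}
```

The paper's negative result concerns general (non-monotone) DNF: it exhibits a formula on which any algorithm following this kind of greedy prime-implicant strategy must take exponential time, while the positive result shows that "most" DNF formulas, under natural distributions over formulas, avoid this obstacle.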

Original language: English (US)
Pages (from-to): 183-208
Number of pages: 26
Journal: Machine Learning
Volume: 19
Issue number: 3
DOIs
State: Published - Jun 1995
Externally published: Yes

Keywords

  • DNF formulas
  • PAC learning
  • computational learning theory
  • concept learning
  • greedy heuristic

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
