## Abstract

We present two related results about the learnability of disjunctive normal form (DNF) formulas. First we show that a common approach for learning arbitrary DNF formulas requires exponential time. We then contrast this with a polynomial time algorithm for learning “most” (rather than all) DNF formulas. A natural approach for learning Boolean functions involves greedily collecting the prime implicants of the hidden function. In a seminal paper of learning theory, Valiant demonstrated the efficacy of this approach for learning monotone DNF, and suggested this approach for learning DNF. Here we show that no algorithm using such an approach can learn DNF in polynomial time. We show this by constructing a counterexample DNF formula that would force such an algorithm to take exponential time. This counterexample seems to capture much of what makes DNF hard to learn, and thus is useful to consider when evaluating the run-time of a proposed DNF learning algorithm. This hardness result, as well as other hardness results for learning DNF, relies on the construction of particular hard-to-learn formulas, formulas that appear to be relatively rare. This raises the question of whether most DNF formulas are learnable. For certain natural definitions of “most DNF formulas,” we answer this question affirmatively.
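To illustrate the greedy prime-implicant approach mentioned above, here is a minimal sketch for the monotone case, assuming membership-query access to the hidden function. This is not the paper's exact procedure; the function names and the example target formula are illustrative. For a monotone function, a single greedy pass that drops variables from a positive example, keeping only those whose removal makes the oracle reject, yields a prime implicant.

```python
def minimize_to_prime_implicant(example, oracle):
    """Greedily drop variables from a positive example until none can be
    removed without the oracle rejecting; the surviving variables form a
    prime implicant. (One pass suffices for monotone functions.)"""
    term = set(i for i, bit in enumerate(example) if bit)
    for i in sorted(term):
        candidate = term - {i}
        # Monotonicity: it suffices to test the assignment that sets
        # exactly the candidate's variables to 1.
        point = [1 if j in candidate else 0 for j in range(len(example))]
        if oracle(point):
            term = candidate
    return frozenset(term)

def learn_monotone_dnf(positive_examples, oracle):
    """Collect the prime implicants reachable from the given positive
    examples; the hypothesis is their disjunction."""
    return {minimize_to_prime_implicant(ex, oracle) for ex in positive_examples}

# Hypothetical hidden function f = x0*x1 + x2 over three variables.
hidden = lambda x: (x[0] and x[1]) or x[2]
terms = learn_monotone_dnf([[1, 1, 0], [1, 1, 1], [0, 0, 1]], hidden)
# terms == {frozenset({0, 1}), frozenset({2})}
```

The paper's hardness result shows that for general (non-monotone) DNF, no algorithm built around this kind of greedy prime-implicant collection can run in polynomial time.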

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 183-208 |
| Number of pages | 26 |
| Journal | Machine Learning |
| Volume | 19 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jun 1995 |
| Externally published | Yes |

## Keywords

- DNF formulas
- PAC learning
- computational learning theory
- concept learning
- greedy heuristic

## ASJC Scopus subject areas

- Software
- Artificial Intelligence