On learning visual concepts and DNF formulae

Eyal Kushilevitz, Dan Roth

Research output: Contribution to journal › Article › peer-review


We consider the problem of learning DNF formulae in the mistake-bound and PAC models. We develop a new approach, called polynomial explainability, that is shown to be useful for learning some new subclasses of DNF (and CNF) formulae that were not previously known to be learnable. Unlike previous learnability results for DNF (and CNF) formulae, these subclasses are not limited in the number of terms or in the number of variables per term; yet they contain the classes of k-DNF and k-term-DNF (and the corresponding CNF classes) as special cases. We apply our DNF results to the problem of learning visual concepts and obtain learning algorithms for several natural subclasses of visual concepts that appear to have no natural boolean counterpart. On the other hand, we show that learning some other natural subclasses of visual concepts is as hard as learning the class of all DNF formulae. We also consider the robustness of these results under various types of noise.
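For orientation, a DNF formula is a disjunction of conjunctive terms, and in k-DNF each term has at most k literals. The following is a minimal illustrative sketch of how such a formula can be represented and evaluated; it is not the paper's learning algorithm, and all names are hypothetical:

```python
# Illustrative sketch only: representing and evaluating a DNF formula.
# A literal (i, True) means variable x_i; (i, False) means NOT x_i.
# A term is a list of literals; a DNF formula is a list of terms.

def eval_term(term, x):
    # A conjunctive term is satisfied iff every literal agrees with x.
    return all(bool(x[i]) == positive for i, positive in term)

def eval_dnf(terms, x):
    # A DNF formula is satisfied iff at least one term is satisfied.
    return any(eval_term(t, x) for t in terms)

# Example 2-DNF over x_0, x_1, x_2: (x_0 AND NOT x_1) OR x_2
formula = [[(0, True), (1, False)], [(2, True)]]
```

Here each term has at most two literals, so the formula is a 2-DNF; the subclasses studied in the paper generalize beyond such bounds on term size or term count.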

Original language: English (US)
Pages (from-to): 65-85
Number of pages: 21
Journal: Machine Learning
Issue number: 1
State: Published - Jul 1996

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

