Permissive planning: Extending classical planning to uncertain task domains

Gerald F. DeJong, Scott W. Bennett

Research output: Contribution to journal › Article › peer-review


Uncertainty, inherent in most real-world domains, can cause failure of apparently sound classical plans. On the other hand, reasoning with representations that explicitly reflect uncertainty can engender significant, even prohibitive, additional computational costs. This paper contributes a novel approach to planning in uncertain domains that extends classical planning. Machine learning is employed to adjust planner bias in response to execution failures, conditioning the classical planner towards producing plans that tend to work when executed in the world. The planner's representations are simple and crisp; uncertainty is represented and reasoned about only during learning. The user-supplied domain theory is left intact: the operator definitions and the planner's projection ability remain as the domain expert intended them. Some structuring of the planner's bias space is required, but with suitable structuring the approach scales well. The learning converges using no more than a polynomial number of examples. The system then probabilistically guarantees that either the plans produced will achieve their goals when executed or that adequate planning is not possible with the domain theory provided. An implemented robotic system is described.
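The abstract's core idea, adjusting a classical planner's bias in response to execution failures while keeping the domain theory crisp, can be illustrated with a minimal sketch. The scenario below (a grasp whose planned opening must cover an object's uncertain true size) and all names and numbers are invented for illustration; the paper's actual system is far more general.

```python
# Illustrative sketch only: the "bias" here is a single numeric safety
# margin a toy planner adds to its crisp target. Uncertainty is handled
# only in the learning loop, not in the planner's representations.

def plan(nominal, margin):
    """Classical-style crisp plan: a grasp open-width for a nominal
    object size, widened permissively by the learned bias."""
    return nominal + margin

def execute(open_width, actual_size):
    """Execution succeeds only if the planned grasp is wide enough
    for the object's true (uncertain) size."""
    return open_width >= actual_size

def learn_margin(nominal, observed_sizes, step=0.05):
    """Adjust the planner bias on each observed execution failure;
    the domain theory (plan, execute) is left intact."""
    margin = 0.0
    for size in observed_sizes:
        while not execute(plan(nominal, margin), size):
            margin += step  # failure: make future plans more permissive
    return margin
```

For example, `learn_margin(1.0, [1.02, 1.07, 1.04])` widens the margin twice and thereafter produces plans that succeed on all three observed sizes; the third example needs no adjustment, loosely mirroring convergence after finitely many failures.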

Original language: English (US)
Pages (from-to): 173-217
Number of pages: 45
Journal: Artificial Intelligence
Issue number: 1-2
State: Published - Jan 1997

Keywords

  • Explanation-based learning
  • Learning
  • Machine learning
  • Planning
  • Planning bias
  • Uncertainty

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
  • Artificial Intelligence

