Abstract
Uncertainty, inherent in most real-world domains, can cause apparently sound classical plans to fail. On the other hand, reasoning with representations that explicitly reflect uncertainty can incur significant, even prohibitive, additional computational costs. This paper contributes a novel approach to planning in uncertain domains that extends classical planning: machine learning is employed to adjust the planner's bias in response to execution failures, so that the classical planner is conditioned towards producing plans that tend to work when executed in the world. The planner's representations remain simple and crisp; uncertainty is represented and reasoned about only during learning. The user-supplied domain theory is left intact: the operator definitions and the planner's projection ability remain as the domain expert intended them. Some structuring of the planner's bias space is required, but with suitable structuring the approach scales well. The learning converges using no more than a polynomial number of examples, after which the system probabilistically guarantees either that the plans produced will achieve their goals when executed or that adequate planning is not possible with the domain theory provided. An implemented robotic system is described.
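The following is a minimal sketch of the failure-driven loop the abstract describes, under loose assumptions: a crisp classical planner, a structured bias space searched in response to execution failures, and a failure report when the space is exhausted. Every name here (`BIAS_SPACE`, `plan`, `execute`, `learn_bias`) is an illustrative assumption, not the paper's actual implementation.

```python
"""Hypothetical sketch of failure-driven bias adjustment: the planner stays
classical and crisp; uncertainty enters only through observed execution
failures, which drive the search over a structured bias space."""

import random

random.seed(0)

# Structured bias space (the "structuring" the abstract mentions): here,
# candidate safety margins the planner pads its operator parameters with.
BIAS_SPACE = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]


def plan(goal_distance, margin):
    """Crisp classical planning: command a nominal move covering the goal
    distance, padded by the current bias margin. No uncertainty here."""
    return goal_distance + margin


def execute(move):
    """Simulated uncertain world: the actual effect is the commanded move
    minus random slippage. The planner never reasons about this noise."""
    return move - random.uniform(0.0, 0.35)


def learn_bias(goal_distance=1.0, trials_per_bias=20):
    """Adjust the bias in response to execution failures. The total number
    of examples is bounded by len(BIAS_SPACE) * trials_per_bias. Returns a
    bias under which every trial succeeded, or None, signalling that
    adequate planning may not be possible with this domain theory."""
    for margin in BIAS_SPACE:
        if all(execute(plan(goal_distance, margin)) >= goal_distance
               for _ in range(trials_per_bias)):
            return margin
    return None


if __name__ == "__main__":
    print("learned bias margin:", learn_bias())
```

This toy loop only mirrors the abstract's high-level shape: operator definitions are untouched, uncertainty appears only during learning, and the example count is polynomial in the (structured) bias space size and the per-bias trial budget.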
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 173-217 |
| Number of pages | 45 |
| Journal | Artificial Intelligence |
| Volume | 89 |
| Issue number | 1-2 |
| DOIs | |
| State | Published - Jan 1997 |
| Externally published | Yes |
Keywords
- Explanation-based learning
- Learning
- Machine learning
- Planning
- Planning bias
- Uncertainty
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language
- Artificial Intelligence