Abstract
We consider qualitative strategy synthesis for the formalism of consumption Markov decision processes. This formalism models the dynamics of an agent that operates under resource constraints in a stochastic environment. The presented algorithms run in time polynomial in the size of the model representation and synthesize strategies ensuring that a given set of goal states is reached (once or infinitely many times) with probability 1 and without resource exhaustion. In particular, when the amount of resource becomes too low to safely continue the mission, the strategy steers the agent toward one of a designated set of reload states, where the agent replenishes the resource to full capacity; once the resource level is sufficient again, the agent resumes the mission. We also present two heuristics that attempt to reduce the expected time the agent needs to fulfill the given mission, a parameter that is important in practical planning. The presented algorithms were implemented, and numerical examples demonstrate the effectiveness (in terms of computation time) of the planning approach based on consumption Markov decision processes, as well as the positive impact of the two heuristics on planning in a realistic example.
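The core mechanism described above, a strategy that monitors the remaining resource and diverts to a reload state whenever continuing the mission would risk exhaustion, can be illustrated with a short sketch. The snippet below is not the paper's implementation; the identifiers (`CMDP`, `choose_action`, `goal_policy`, `reload_policy`, `safe_threshold`) are hypothetical, and the policies and thresholds are assumed to have been computed offline by the kind of polynomial-time synthesis the paper describes.

```python
# Minimal illustrative sketch of a consumption MDP and a resource-aware strategy.
# NOT the authors' implementation; all identifiers here are hypothetical.
from dataclasses import dataclass


@dataclass
class CMDP:
    states: set            # finite set of states
    actions: dict          # state -> list of available actions
    consumption: dict      # (state, action) -> non-negative resource consumption
    transitions: dict      # (state, action) -> {successor state: probability}
    reload_states: set     # states where the resource is replenished to full capacity
    capacity: int          # maximum resource level the agent can carry


def choose_action(cmdp, state, resource, goal_policy, reload_policy, safe_threshold):
    """Two-mode strategy: pursue the goal while the resource suffices,
    otherwise divert toward a reload state.

    goal_policy, reload_policy: state -> action, assumed precomputed offline.
    safe_threshold: state -> minimum resource level from which the mission
    can still be pursued without risking exhaustion (assumed precomputed).
    """
    if state in cmdp.reload_states:
        resource = cmdp.capacity               # replenish to full capacity
    if resource >= safe_threshold[state]:
        action = goal_policy[state]            # enough resource: continue the mission
    else:
        action = reload_policy[state]          # resource too low: head to a reload state
    remaining = resource - cmdp.consumption[(state, action)]
    return action, remaining
```

The sketch reflects the two-mode behavior described in the abstract: the strategy only needs the current state and the current resource level, so it can be realized as a bounded counter on top of finite-memory policies.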
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 4586-4601 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 68 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 1 2023 |
Keywords
- Consumption Markov decision process (CMDP)
- planning
- resource constraints
- strategy synthesis
ASJC Scopus subject areas
- Electrical and Electronic Engineering
- Control and Systems Engineering
- Computer Science Applications