Abstract
We consider the problem of solving hybrid discrete-continuous Markov Decision Processes (MDPs) that arise in computing optimal policies for complex multi-agent missions with both continuous vehicle dynamics and discrete mission-state transition models, in the presence of potential health degradations and failures of individual agents. A comprehensive Health Aware Planning (HAP) framework is proposed that establishes feedback between mission planning and vehicle-level learning-focused adaptive controllers through online-learned models of agent health and capabilities. The HAP framework accounts for the predicted likelihood of vehicle health degradation, captured through probabilistic state-dependent models that are integrated into the MDP formulation. This proactive ability to anticipate health degradation and plan accordingly enables the HAP approach to consistently outperform reactive planners, which change their policies only after failures have occurred. The approach is tested on a large-scale (≈ 10¹⁰ state-action pairs), long-duration (persistent) target tracking scenario using a novel on-trajectory planning algorithm, and is demonstrated to sustain higher mission performance by reducing the number of failures and re-assessing Unmanned Aerial Vehicle (UAV) capabilities.
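To make the idea of a health-augmented MDP concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): value iteration on a toy MDP whose state couples a vehicle location with a discrete health level, where the chosen action influences the probability of health degradation. All numbers (degradation probabilities, rewards, state sizes) are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: value iteration on a tiny health-augmented MDP.
# State = (location, health); health 2 = healthy, 1 = degraded, 0 = failed.
n_loc, n_health = 4, 3
actions = [0, 1]                 # 0 = conservative loiter, 1 = aggressive track
gamma = 0.95                     # discount factor

# Assumed action-dependent degradation probabilities (one health level drop).
p_degrade = {0: 0.01, 1: 0.10}

def reward(health, a):
    # A failed vehicle earns nothing; aggressive tracking earns more
    # immediate reward but stresses the vehicle.
    if health == 0:
        return 0.0
    return 1.0 if a == 0 else 2.0

def q_value(V, loc, h, a):
    nxt = (loc + 1) % n_loc      # deterministic motion, kept simple here
    pd = p_degrade[a]
    return reward(h, a) + gamma * ((1 - pd) * V[nxt, h] + pd * V[nxt, h - 1])

V = np.zeros((n_loc, n_health))
for _ in range(200):             # value iteration sweeps until convergence
    V_new = np.zeros_like(V)
    for loc in range(n_loc):
        for h in range(1, n_health):   # h == 0 is absorbing with value 0
            V_new[loc, h] = max(q_value(V, loc, h, a) for a in actions)
    V = V_new

# The resulting policy is health-aware: it trades immediate tracking
# reward against the expected cost of future failure.
policy = {(loc, h): int(np.argmax([q_value(V, loc, h, a) for a in actions]))
          for loc in range(n_loc) for h in range(1, n_health)}
```

In this toy setting the optimal policy tracks aggressively while healthy but switches to the conservative action once degraded, which mirrors the proactive behavior the abstract describes, in contrast to a reactive planner that replans only after failure.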
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 89-107 |
| Number of pages | 19 |
| Journal | Unmanned Systems |
| Volume | 3 |
| Issue number | 2 |
| DOIs | |
| State | Published - Apr 1 2015 |
| Externally published | Yes |
Keywords
- Health aware systems
- learning focused adaptive control
- planning under uncertainty
ASJC Scopus subject areas
- Control and Systems Engineering
- Automotive Engineering
- Aerospace Engineering
- Control and Optimization