Abstract
Uncrewed autonomous vehicles (UAVs) have made significant contributions to reconnaissance and surveillance missions in past US military campaigns. As the prevalence of UAVs increases, there have also been improvements in counter-UAV technology that make it difficult for them to successfully obtain valuable intelligence within an area of interest. Hence, it has become important that modern UAVs can accomplish their missions while maximizing their chances of survival. In this work, we specifically study the problem of identifying a short path from a designated start to a goal, while collecting all rewards and avoiding adversaries that move randomly on the grid. We also provide a possible application of the framework in a military setting, namely autonomous casualty evacuation. We present a comparison of three methods to solve this problem: a Deep Q-Learning model, an ε-greedy tabular Q-Learning model, and an online optimization framework. Our computational experiments, designed using simple grid-world environments with random adversaries, showcase how these approaches work and compare them in terms of performance, accuracy, and computational time.
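To illustrate one of the compared methods, the following is a minimal sketch of ε-greedy tabular Q-Learning on a toy grid world. It is not the paper's implementation: the grid size, reward values, hyperparameters, and the omission of rewards to collect and of moving adversaries are all simplifying assumptions made here for brevity.

```python
import random

def train_q_learning(size=4, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular epsilon-greedy Q-learning on a toy grid (illustrative only)."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    goal = (size - 1, size - 1)
    Q = {}  # state -> list of four action values
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):  # step cap per episode
            q = Q.setdefault(s, [0.0] * 4)
            # epsilon-greedy action selection: explore with probability eps
            a = random.randrange(4) if random.random() < eps else max(range(4), key=q.__getitem__)
            dr, dc = actions[a]
            # clamp the move to the grid boundary
            ns = (min(max(s[0] + dr, 0), size - 1), min(max(s[1] + dc, 0), size - 1))
            r = 1.0 if ns == goal else -0.01  # small step penalty encourages short paths
            nq = Q.setdefault(ns, [0.0] * 4)
            # standard Q-learning temporal-difference update
            q[a] += alpha * (r + gamma * max(nq) - q[a])
            s = ns
            if s == goal:
                break
    return Q

def greedy_path(Q, size=4, limit=50):
    """Roll out the learned greedy policy from the start state."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    s, path = (0, 0), [(0, 0)]
    while s != (size - 1, size - 1) and len(path) < limit:
        a = max(range(4), key=Q.get(s, [0.0] * 4).__getitem__)
        dr, dc = actions[a]
        s = (min(max(s[0] + dr, 0), size - 1), min(max(s[1] + dc, 0), size - 1))
        path.append(s)
    return path
```

The step penalty plays the role of the "short path" objective in the abstract; in the full problem, adversary positions would also be part of the state and collisions would incur a large negative reward.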
| Original language | English (US) |
| --- | --- |
| Article number | 35 |
| Journal | Journal of Intelligent and Robotic Systems: Theory and Applications |
| Volume | 104 |
| Issue number | 2 |
| DOIs | |
| State | Published - Feb 2022 |
Keywords
- Deep Q-learning
- Online optimization
- Random adversaries
- Reinforcement learning
- Uncrewed autonomous vehicles
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Mechanical Engineering
- Industrial and Manufacturing Engineering
- Electrical and Electronic Engineering
- Artificial Intelligence