TY - JOUR
T1 - Solving Reward-Collecting Problems with UAVs
T2 - A Comparison of Online Optimization and Q-Learning
AU - Liu, Yixuan
AU - Vogiatzis, Chrysafis
AU - Yoshida, Ruriko
AU - Morman, Erich
N1 - The authors would like to thank CAPT (Ret) Jeff Kline at Naval Postgraduate School and Dr. Timothy Bentley at the Office of Naval Research for sharing information on the problem of Autonomous Casualty Evacuation.
R.Y. is partially supported by NSF DMS 1916037 and Consortium for Robotics and Unmanned Systems Education and Research (CRUSER).
PY - 2022/2
Y1 - 2022/2
N2 - Uncrewed autonomous vehicles (UAVs) have made significant contributions to reconnaissance and surveillance missions in past US military campaigns. As the prevalence of UAVs increases, there have also been improvements in counter-UAV technology that make it difficult for them to successfully obtain valuable intelligence within an area of interest. Hence, it has become important that modern UAVs can accomplish their missions while maximizing their chances of survival. In this work, we specifically study the problem of identifying a short path from a designated start to a goal, while collecting all rewards and avoiding adversaries that move randomly on the grid. We also provide a possible application of the framework in a military setting, that of autonomous casualty evacuation. We present a comparison of three methods to solve this problem: namely, we implement a Deep Q-Learning model, an ε-greedy tabular Q-Learning model, and an online optimization framework. Our computational experiments, designed using simple grid-world environments with random adversaries, showcase how these approaches work and compare them in terms of performance, accuracy, and computational time.
AB - Uncrewed autonomous vehicles (UAVs) have made significant contributions to reconnaissance and surveillance missions in past US military campaigns. As the prevalence of UAVs increases, there have also been improvements in counter-UAV technology that make it difficult for them to successfully obtain valuable intelligence within an area of interest. Hence, it has become important that modern UAVs can accomplish their missions while maximizing their chances of survival. In this work, we specifically study the problem of identifying a short path from a designated start to a goal, while collecting all rewards and avoiding adversaries that move randomly on the grid. We also provide a possible application of the framework in a military setting, that of autonomous casualty evacuation. We present a comparison of three methods to solve this problem: namely, we implement a Deep Q-Learning model, an ε-greedy tabular Q-Learning model, and an online optimization framework. Our computational experiments, designed using simple grid-world environments with random adversaries, showcase how these approaches work and compare them in terms of performance, accuracy, and computational time.
KW - Deep Q-learning
KW - Online optimization
KW - Random adversaries
KW - Reinforcement learning
KW - Uncrewed autonomous vehicles
UR - http://www.scopus.com/inward/record.url?scp=85124701993&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124701993&partnerID=8YFLogxK
U2 - 10.1007/s10846-021-01548-2
DO - 10.1007/s10846-021-01548-2
M3 - Article
AN - SCOPUS:85124701993
SN - 0921-0296
VL - 104
JO - Journal of Intelligent and Robotic Systems: Theory and Applications
JF - Journal of Intelligent and Robotic Systems: Theory and Applications
IS - 2
M1 - 35
ER -