Solving Reward-Collecting Problems with UAVs: A Comparison of Online Optimization and Q-Learning

Yixuan Liu, Chrysafis Vogiatzis, Ruriko Yoshida, Erich Morman

Research output: Contribution to journal › Article › peer-review

Abstract

Uncrewed autonomous vehicles (UAVs) have made significant contributions to reconnaissance and surveillance missions in past US military campaigns. As the prevalence of UAVs increases, there have also been improvements in counter-UAV technologies that make it difficult for them to successfully obtain valuable intelligence within an area of interest. Hence, it has become important for modern UAVs to accomplish their missions while maximizing their chances of survival. In this work, we specifically study the problem of identifying a short path from a designated start to a goal while collecting all rewards and avoiding adversaries that move randomly on a grid. We also provide a possible application of the framework in a military setting, that of autonomous casualty evacuation. We present a comparison of three methods to solve this problem: a Deep Q-Learning model, an ε-greedy tabular Q-Learning model, and an online optimization framework. Our computational experiments, designed using simple grid-world environments with random adversaries, showcase how these approaches work and compare them in terms of performance, accuracy, and computational time.
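To make the tabular variant concrete, the following is a minimal sketch of ε-greedy Q-learning on a hypothetical 5×5 grid with a single reward cell and one randomly moving adversary. The grid size, positions, reward values, and hyperparameters here are illustrative assumptions, not the paper's actual experimental setup.

```python
import random
from collections import defaultdict

# Hypothetical 5x5 grid world (sizes, positions, and rewards are
# illustrative assumptions, not taken from the paper).
SIZE = 5
START, GOAL, REWARD_CELL = (0, 0), (4, 4), (2, 2)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def clip(pos):
    """Keep a position inside the grid."""
    return (min(max(pos[0], 0), SIZE - 1), min(max(pos[1], 0), SIZE - 1))

def step(agent, adversary, collected, action):
    """One transition: the agent moves, then the adversary moves randomly."""
    agent = clip((agent[0] + action[0], agent[1] + action[1]))
    da = random.choice(ACTIONS)                      # uniformly random adversary
    adversary = clip((adversary[0] + da[0], adversary[1] + da[1]))
    reward, done = -1.0, False                       # step cost favors short paths
    if agent == REWARD_CELL and not collected:
        reward, collected = 10.0, True               # pick up the reward once
    if agent == adversary:
        reward, done = -50.0, True                   # caught by the adversary
    elif agent == GOAL and collected:
        reward, done = 50.0, True                    # goal reached with reward in hand
    return agent, adversary, collected, reward, done

Q = defaultdict(float)                               # Q[(state, action)] -> value
alpha, gamma, eps = 0.1, 0.95, 0.1                   # assumed hyperparameters

def policy(state):
    """Epsilon-greedy action selection over the tabular Q-values."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(5000):
    agent, adversary, collected = START, (4, 0), False
    for _ in range(200):                             # cap episode length
        state = (agent, adversary, collected)
        action = policy(state)
        agent, adversary, collected, reward, done = step(
            agent, adversary, collected, action)
        next_state = (agent, adversary, collected)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # One-step Q-learning update (reward-only target on terminal steps).
        Q[(state, action)] += alpha * (
            reward + gamma * (0.0 if done else best_next) - Q[(state, action)])
        if done:
            break
```

Note that the state includes the adversary's position and a collected-reward flag: because the adversary moves randomly, both are needed for the Markov property to hold. The paper's actual state representation, reward structure, and hyperparameters may differ.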

Original language: English (US)
Article number: 35
Journal: Journal of Intelligent and Robotic Systems: Theory and Applications
Volume: 104
Issue number: 2
DOIs
State: Published - Feb 2022

Keywords

  • Deep Q-learning
  • Online optimization
  • Random adversaries
  • Reinforcement learning
  • Uncrewed autonomous vehicles

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Mechanical Engineering
  • Industrial and Manufacturing Engineering
  • Electrical and Electronic Engineering
  • Artificial Intelligence
