Abstract
Collision-free path planning is a major challenge in managing unmanned aerial vehicle (UAV) fleets, especially in uncertain environments. In this paper, we consider the design of UAV routing policies using multi-agent reinforcement learning, and propose a Multi-resolution, Multi-agent, Mean-field reinforcement learning algorithm, named 3M-RL, for flight planning in which multiple vehicles must avoid collisions with each other while moving towards their destinations. In the system we consider, each UAV makes decisions based on local observations and does not communicate with other UAVs. The algorithm trains a routing policy using an actor-critic neural network with multi-resolution observations, combining detailed local information with aggregated global information based on a mean-field approximation. This design tackles the curse of dimensionality in multi-agent reinforcement learning and provides a scalable solution. We test the algorithm in a range of complex scenarios in both 2D and 3D space, and our simulation results show that 3M-RL yields good routing policies.
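The abstract's central architectural idea is a policy network that fuses two observation resolutions: a detailed local view around each UAV and a coarse global view aggregated via a mean field. The sketch below illustrates one plausible way to wire this up; it is not the authors' released code, and the observation shapes (a flattened 5x5 local grid, 9 coarse mean-field sectors), layer sizes, and the 5-action space are all illustrative assumptions.

```python
# Hedged sketch of a multi-resolution actor-critic network in PyTorch.
# Shapes and sizes are assumptions for illustration, not the paper's.
import torch
import torch.nn as nn

class MultiResolutionActorCritic(nn.Module):
    def __init__(self, local_dim=25, global_dim=9, hidden=64, n_actions=5):
        super().__init__()
        # Separate encoders for the two observation resolutions.
        self.local_enc = nn.Sequential(nn.Linear(local_dim, hidden), nn.ReLU())
        self.global_enc = nn.Sequential(nn.Linear(global_dim, hidden), nn.ReLU())
        # Shared trunk over the fused features.
        self.trunk = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)  # policy logits
        self.critic = nn.Linear(hidden, 1)         # state-value estimate

    def forward(self, local_obs, global_obs):
        z = torch.cat([self.local_enc(local_obs),
                       self.global_enc(global_obs)], dim=-1)
        h = self.trunk(z)
        return torch.distributions.Categorical(logits=self.actor(h)), self.critic(h)

# Usage: each UAV evaluates the shared policy on its own observations only,
# consistent with the no-communication setting described in the abstract.
net = MultiResolutionActorCritic()
local = torch.rand(1, 25)     # e.g. flattened 5x5 occupancy grid around the UAV
global_mf = torch.rand(1, 9)  # e.g. mean-field occupancy of coarse sectors
dist, value = net(local, global_mf)
action = dist.sample()
```

Because the global input is an aggregate (e.g., average occupancy per sector) rather than per-agent state, its dimension is fixed regardless of fleet size, which is what makes the mean-field view a scalable answer to the curse of dimensionality.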
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 8985-8996 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Intelligent Transportation Systems |
| Volume | 23 |
| Issue number | 7 |
| DOIs | |
| State | Published - Jul 2022 |
Keywords
- Actor-critic
- Atmospheric modeling
- Collision avoidance
- mean-field
- Multiagent reinforcement learning
- Planning
- Reinforcement learning
- Routing
- Three-dimensional displays
- Unmanned aerial vehicles
ASJC Scopus subject areas
- Automotive Engineering
- Mechanical Engineering
- Computer Science Applications