TY - GEN
T1 - Deep reinforcement learning for UAV-assisted emergency response
AU - Lee, Isabella
AU - Babu, Vignesh
AU - Caesar, Matthew
AU - Nicol, David
N1 - Funding Information:
This work was supported by Boeing Research & Technology (BR&T) under the Collaborative Research Project BRT-Z0518-5050: Deep Learning-based Tactical IoT Networking in Contested Environments. The authors would like to thank Dr. Jae H. Kim (Boeing PM) for his advice and guidance throughout the project.
Publisher Copyright:
© 2020 ACM.
PY - 2020/12/7
Y1 - 2020/12/7
N2 - In the aftermath of a disaster, the ability to reliably communicate and coordinate emergency response could make a meaningful difference in the number of lives saved or lost. However, post-disaster areas tend to have little functioning communication network infrastructure, while emergency response teams carry an increasing number of devices, such as sensors and video-transmitting equipment, which can be low-powered with limited transmission ranges. In such scenarios, unmanned aerial vehicles (UAVs) can be used as relays to connect these devices with each other. Since first responders are likely to be constantly mobile, where these UAVs are placed and how they move in response to the changing environment can have a large effect on the number of connections the UAV relay network is able to maintain. In this work, we propose DroneDR, a reinforcement learning framework for UAV positioning that uses information about connectivity requirements and user node positions to decide how to move each UAV in the network while maintaining connectivity between UAVs. The proposed approach is shown to outperform greedy heuristic baselines across a broad range of scenarios and demonstrates the potential of using reinforcement learning techniques to aid communication during disaster relief operations.
AB - In the aftermath of a disaster, the ability to reliably communicate and coordinate emergency response could make a meaningful difference in the number of lives saved or lost. However, post-disaster areas tend to have little functioning communication network infrastructure, while emergency response teams carry an increasing number of devices, such as sensors and video-transmitting equipment, which can be low-powered with limited transmission ranges. In such scenarios, unmanned aerial vehicles (UAVs) can be used as relays to connect these devices with each other. Since first responders are likely to be constantly mobile, where these UAVs are placed and how they move in response to the changing environment can have a large effect on the number of connections the UAV relay network is able to maintain. In this work, we propose DroneDR, a reinforcement learning framework for UAV positioning that uses information about connectivity requirements and user node positions to decide how to move each UAV in the network while maintaining connectivity between UAVs. The proposed approach is shown to outperform greedy heuristic baselines across a broad range of scenarios and demonstrates the potential of using reinforcement learning techniques to aid communication during disaster relief operations.
KW - Disaster relief
KW - IoT network
KW - Reinforcement learning
KW - UAV network
UR - http://www.scopus.com/inward/record.url?scp=85112718624&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112718624&partnerID=8YFLogxK
U2 - 10.1145/3448891.3448919
DO - 10.1145/3448891.3448919
M3 - Conference contribution
AN - SCOPUS:85112718624
T3 - ACM International Conference Proceeding Series
SP - 327
EP - 336
BT - Proceedings of the 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2020
PB - Association for Computing Machinery
T2 - 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2020
Y2 - 7 December 2020 through 9 December 2020
ER -