Robust Cooperative Multi-Agent Reinforcement Learning: A Mean-Field Type Game Perspective

Muhammad Aneeq uz Zaman, Mathieu Laurière, Alec Koppel, Tamer Başar

Research output: Contribution to journal › Conference article › peer-review

Abstract

In this paper, we study the problem of robust cooperative multi-agent reinforcement learning (RL), where a large number of cooperative agents with distributed information aim to learn policies in the presence of stochastic and non-stochastic uncertainties whose distributions are respectively known and unknown. Focusing on policy optimization that accounts for both types of uncertainties, we formulate the problem in a worst-case (minimax) framework, which is intractable in general. Thus, we focus on the Linear Quadratic setting to derive benchmark solutions. First, since no standard theory exists for this problem due to the distributed information structure, we utilize the Mean-Field Type Game (MFTG) paradigm to establish guarantees on the solution quality in the sense of the achieved Nash equilibrium of the MFTG. This in turn allows us to compare the performance against the corresponding original robust multi-agent control problem. Then, we propose a Receding-horizon Gradient Descent Ascent RL algorithm to find the MFTG Nash equilibrium, and we prove a non-asymptotic rate of convergence. Finally, we provide numerical experiments to demonstrate the efficacy of our approach relative to a baseline algorithm.
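To give a flavor of the gradient descent ascent (GDA) idea underlying the paper's algorithm, the sketch below runs plain GDA on a toy quadratic saddle-point problem: the controller u descends the cost while the adversary w ascends it. This is only an illustrative assumption-laden toy, not the paper's Receding-horizon GDA method, which operates on MFTG policies over a receding horizon; all matrices and step sizes here are hypothetical.

```python
import numpy as np

# Toy quadratic saddle-point problem (hypothetical, for illustration):
#   J(u, w) = u'Qu + u'Cw - w'Rw,   min over u, max over w.
# The unique saddle point is (u, w) = (0, 0).
rng = np.random.default_rng(0)
n = 4
Q = np.eye(n)                          # convex penalty on the controller
R = 2.0 * np.eye(n)                    # concave penalty on the adversary
C = 0.5 * rng.standard_normal((n, n))  # coupling between u and w

def cost(u, w):
    return u @ Q @ u + u @ C @ w - w @ R @ w

u = rng.standard_normal(n)
w = rng.standard_normal(n)
eta = 0.05  # step size, assumed small enough for convergence

for _ in range(2000):
    grad_u = 2 * Q @ u + C @ w    # gradient for the minimizing player
    grad_w = C.T @ u - 2 * R @ w  # gradient for the maximizing player
    u = u - eta * grad_u          # descent step on u
    w = w + eta * grad_w          # ascent step on w

print("cost at iterate:", cost(u, w))  # approaches 0 at the saddle point
```

For strongly-convex-strongly-concave costs such as this toy example, simultaneous GDA with a sufficiently small step size converges to the saddle point; the paper establishes a non-asymptotic rate for its receding-horizon variant in the Linear Quadratic MFTG setting.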

Original language: English (US)
Pages (from-to): 770-783
Number of pages: 14
Journal: Proceedings of Machine Learning Research
Volume: 242
State: Published - 2024
Externally published: Yes
Event: 6th Annual Learning for Dynamics and Control Conference, L4DC 2024 - Oxford, United Kingdom
Duration: Jul 15 2024 - Jul 17 2024

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
