Abstract
This paper addresses the problem of efficiently learning an equilibrium in general-sum Markov games through decentralized multi-agent reinforcement learning. Given the fundamental difficulty of computing a Nash equilibrium (NE), we instead aim at finding a coarse correlated equilibrium (CCE), a solution concept that generalizes NE by allowing possible correlations among the agents' strategies. We propose an algorithm in which each agent independently runs optimistic V-learning (a variant of Q-learning) to efficiently explore the unknown environment, while using a stabilized online mirror descent (OMD) subroutine for policy updates. We show that the agents can find an ϵ-approximate CCE in at most Õ(H⁶SA/ϵ²) episodes, where S is the number of states, A is the size of the largest individual action space, and H is the length of an episode. This appears to be the first sample complexity result for learning in generic general-sum Markov games. Our results rely on a novel investigation of an anytime high-probability regret bound for OMD with a dynamic learning rate and weighted regret, which may be of independent interest. A key feature of our algorithm is that it is decentralized, in the sense that each agent has access to only its local information and is completely oblivious to the presence of others. As a result, our algorithm readily scales up to an arbitrary number of agents without suffering from an exponential dependence on the number of agents.
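The abstract describes the algorithm only at a high level. Below is a minimal, illustrative sketch (not the paper's exact method) of one agent's local loop, combining an optimistic V-learning value update with an entropy-regularized OMD (exponential-weights) policy step over the agent's own actions. The class and function names, the count-based bonus, and the learning-rate schedules (`alpha`, `lr`, `bonus_scale`) are assumptions made for illustration and do not reproduce the paper's exact constants.

```python
import numpy as np

def omd_update(policy, loss_vector, lr):
    """One entropy-regularized OMD step on the probability simplex:
    multiplicative (exponential-weights) update followed by renormalization."""
    weights = policy * np.exp(-lr * loss_vector)
    return weights / weights.sum()

class VLearningAgent:
    """Sketch of a single agent's decentralized loop: an optimistic V-value
    update plus an OMD policy update, using only locally observed rewards."""

    def __init__(self, num_states, num_actions, horizon, bonus_scale=1.0):
        self.S, self.A, self.H = num_states, num_actions, horizon
        # Optimistic initialization of the value estimates.
        self.V = np.full((horizon + 1, num_states), float(horizon))
        self.V[horizon] = 0.0
        self.policy = np.full((horizon, num_states, num_actions), 1.0 / num_actions)
        self.counts = np.zeros((horizon, num_states), dtype=int)
        self.bonus_scale = bonus_scale  # illustrative exploration constant

    def act(self, h, s, rng):
        # Sample an action from the current local policy at step h, state s.
        return rng.choice(self.A, p=self.policy[h, s])

    def update(self, h, s, a, reward, next_s):
        # Incremental optimistic V-learning update with a count-based bonus.
        self.counts[h, s] += 1
        t = self.counts[h, s]
        alpha = (self.H + 1) / (self.H + t)            # step size (assumed schedule)
        bonus = self.bonus_scale * np.sqrt(self.H**3 / t)
        target = reward + self.V[h + 1, next_s] + bonus
        self.V[h, s] = min(self.H, (1 - alpha) * self.V[h, s] + alpha * target)

        # Importance-weighted loss estimate for the chosen action only.
        loss = np.zeros(self.A)
        loss[a] = (self.H - reward - self.V[h + 1, next_s]) / (
            self.H * self.policy[h, s, a])
        lr = np.sqrt(np.log(self.A) / (self.A * t))    # dynamic learning rate (assumed)
        self.policy[h, s] = omd_update(self.policy[h, s], loss, lr)
```

In this sketch each agent updates its own value estimates and policy from its local trajectory alone, which is what allows the approach to remain oblivious to the other agents and to scale with the number of players.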
Field | Value
---|---
Original language | English (US)
Pages (from-to) | 165-186
Number of pages | 22
Journal | Dynamic Games and Applications
Volume | 13
Issue number | 1
DOIs |
State | Published - Mar 2023
Keywords
- Coarse correlated equilibrium
- Markov game
- Reinforcement learning
- Sample complexity
ASJC Scopus subject areas
- Statistics and Probability
- Economics and Econometrics
- Computer Science Applications
- Computer Graphics and Computer-Aided Design
- Computational Theory and Mathematics
- Computational Mathematics
- Applied Mathematics