TY - JOUR
T1 - Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
AU - Zhang, Kaiqing
AU - Kakade, Sham M.
AU - Başar, Tamer
AU - Yang, Lin F.
N1 - Publisher Copyright:
© 2023 Kaiqing Zhang, Sham M. Kakade, Tamer Başar, and Lin F. Yang.
PY - 2023
Y1 - 2023
N2 - Model-based reinforcement learning (RL), which finds an optimal policy after establishing an empirical model, has long been recognized as one of the cornerstones of RL. It is especially suitable for multi-agent RL (MARL), as it naturally decouples the learning and the planning phases, and avoids the non-stationarity problem when all agents are improving their policies simultaneously. Though intuitive and widely used, the sample complexity of model-based MARL algorithms has not been fully investigated. In this paper, we aim to address the fundamental question about its sample complexity. We study arguably the most basic MARL setting: two-player discounted zero-sum Markov games, given only access to a generative model. We show that model-based MARL achieves a sample complexity of Õ(|S||A||B|(1 − γ)^{-3} ε^{-2}) for finding the Nash equilibrium (NE) value up to some ε error, and the ε-NE policies with a smooth planning oracle, where γ is the discount factor, and S, A, B denote the state space, and the action spaces for the two agents. We further show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge, by establishing a matching lower bound. This is in contrast to the usual reward-aware setting, where the sample complexity lower bound is Ω̃(|S|(|A| + |B|)(1 − γ)^{-3} ε^{-2}), and this model-based approach is near-optimal with only a gap on the |A|, |B| dependence. Our results not only illustrate the sample-efficiency of this basic model-based MARL approach, but also elaborate on the fundamental tradeoff between its power (easily handling the reward-agnostic case) and limitation (less adaptive and suboptimal in |A|, |B|), which particularly arises in the multi-agent context.
KW - Multi-Agent RL
KW - Near-Optimal Sample Complexity
KW - Zero-Sum Markov Games
UR - http://www.scopus.com/inward/record.url?scp=85214082210&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85214082210&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85214082210
SN - 1532-4435
VL - 24
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
M1 - 175
ER -