TY - JOUR
T1 - A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games
AU - Xiong, Wei
AU - Zhong, Han
AU - Shi, Chengshuai
AU - Shen, Cong
AU - Zhang, Tong
N1 - WX and TZ acknowledge the funding support by GRF 16201320 and the Hong Kong Ph.D. Fellowship. CS and CS acknowledge the funding support by the US National Science Foundation under Grants ECCS-2029978, ECCS-2033671, and CNS-2002902, and the Bloomberg Data Science Ph.D. Fellowship.
PY - 2022
Y1 - 2022
N2 - Existing studies on provably efficient algorithms for Markov games (MGs) almost exclusively build on the “optimism in the face of uncertainty” (OFU) principle. This work focuses on a different approach of posterior sampling, which is celebrated in many bandits and reinforcement learning settings but remains under-explored for MGs. Specifically, for episodic two-player zero-sum MGs, a novel posterior sampling algorithm is developed with general function approximation. Theoretical analysis demonstrates that the posterior sampling algorithm admits a √T-regret bound for problems with a low multi-agent decoupling coefficient, which is a new complexity measure for MGs, where T denotes the number of episodes. When specialized to linear MGs, the obtained regret bound matches the state-of-the-art results. To the best of our knowledge, this is the first provably efficient posterior sampling algorithm for MGs with frequentist regret guarantees, which enriches the toolbox for MGs and promotes the broad applicability of posterior sampling.
AB - Existing studies on provably efficient algorithms for Markov games (MGs) almost exclusively build on the “optimism in the face of uncertainty” (OFU) principle. This work focuses on a different approach of posterior sampling, which is celebrated in many bandits and reinforcement learning settings but remains under-explored for MGs. Specifically, for episodic two-player zero-sum MGs, a novel posterior sampling algorithm is developed with general function approximation. Theoretical analysis demonstrates that the posterior sampling algorithm admits a √T-regret bound for problems with a low multi-agent decoupling coefficient, which is a new complexity measure for MGs, where T denotes the number of episodes. When specialized to linear MGs, the obtained regret bound matches the state-of-the-art results. To the best of our knowledge, this is the first provably efficient posterior sampling algorithm for MGs with frequentist regret guarantees, which enriches the toolbox for MGs and promotes the broad applicability of posterior sampling.
UR - http://www.scopus.com/inward/record.url?scp=85146309008&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85146309008&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85146309008
SN - 2640-3498
VL - 162
SP - 24496
EP - 24523
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 39th International Conference on Machine Learning, ICML 2022
Y2 - 17 July 2022 through 23 July 2022
ER -