A Self-Play Posterior Sampling Algorithm for Zero-Sum Markov Games

Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Tong Zhang

Research output: Contribution to journal › Conference article › peer-review

Abstract

Existing studies on provably efficient algorithms for Markov games (MGs) almost exclusively build on the “optimism in the face of uncertainty” (OFU) principle. This work focuses on a different approach of posterior sampling, which is celebrated in many bandit and reinforcement learning settings but remains under-explored for MGs. Specifically, for episodic two-player zero-sum MGs, a novel posterior sampling algorithm is developed with general function approximation. Theoretical analysis demonstrates that the posterior sampling algorithm admits a √T-regret bound for problems with a low multi-agent decoupling coefficient, which is a new complexity measure for MGs, where T denotes the number of episodes. When specialized to linear MGs, the obtained regret bound matches the state-of-the-art results. To the best of our knowledge, this is the first provably efficient posterior sampling algorithm for MGs with frequentist regret guarantees, which enriches the toolbox for MGs and promotes the broad applicability of posterior sampling.
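The paper's algorithm and analysis are not reproduced on this page. Purely as an illustrative sketch of the posterior-sampling self-play template the abstract refers to (sample a model from the posterior, have both players play an equilibrium of the sampled game, then update the posterior with the observed outcome), the hypothetical Python snippet below runs that loop on a toy single-state zero-sum matrix game with an approximate Gaussian posterior. None of the names, the Gaussian posterior, or the toy setup come from the paper; they are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical toy problem: a single-state zero-sum game with an unknown
# 3x3 payoff matrix; the true matrix is hidden from the learner.
A_true = rng.normal(size=(3, 3))
noise_sd = 0.5

# Approximate Gaussian posterior over each payoff entry:
# running mean plus an observation count (flat prior, mean 0).
post_mean = np.zeros((3, 3))
post_count = np.zeros((3, 3))

def maxmin_strategy(A):
    """Max-min (equilibrium) strategy for the row player of zero-sum
    payoff matrix A, via the standard LP:
    maximize v subject to A^T x >= v, sum(x) = 1, x >= 0."""
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                 # linprog minimizes, so use -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - A^T x <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = [1.0]                                 # probabilities sum to one
    bounds = [(0, None)] * m + [(None, None)]    # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x = np.clip(res.x[:m], 0.0, None)            # guard tiny LP negatives
    return x / x.sum()

for t in range(200):  # episodes
    # 1) Sample a model from the approximate posterior; entries with no
    #    data get a wide prior draw, which drives exploration.
    sd = noise_sd / np.sqrt(np.maximum(post_count, 1.0))
    sd[post_count == 0] = 2.0
    A_sample = rng.normal(post_mean, sd)

    # 2) Self-play: both players play an equilibrium of the SAMPLED game.
    x = maxmin_strategy(A_sample)       # row (max) player
    y = maxmin_strategy(-A_sample.T)    # column (min) player, as a maximizer
    i = rng.choice(3, p=x)
    j = rng.choice(3, p=y)

    # 3) Observe a noisy payoff and update the posterior for entry (i, j).
    r = A_true[i, j] + rng.normal(0.0, noise_sd)
    post_count[i, j] += 1
    post_mean[i, j] += (r - post_mean[i, j]) / post_count[i, j]
```

In the paper's episodic MG setting the sampled object is a full model or value function under general function approximation, and the equilibrium-computation step is correspondingly richer; the toy Gaussian posterior above only mimics the sample-play-update structure, not the algorithm that achieves the stated √T regret.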

Original language: English (US)
Pages (from-to): 24496-24523
Number of pages: 28
Journal: Proceedings of Machine Learning Research
Volume: 162
State: Published - 2022
Externally published: Yes
Event: 39th International Conference on Machine Learning, ICML 2022 - Baltimore, United States
Duration: Jul 17 2022 - Jul 23 2022

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
