Offline Learning in Markov Games with General Function Approximation

Yuheng Zhang, Yu Bai, Nan Jiang

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study offline multi-agent reinforcement learning (RL) in Markov games, where the goal is to learn an approximate equilibrium, such as a Nash equilibrium or a (coarse) correlated equilibrium, from an offline dataset pre-collected from the game. Existing works consider relatively restricted tabular or linear models and handle each equilibrium concept separately. In this work, we provide the first framework for sample-efficient offline learning in Markov games under general function approximation, handling all three equilibria in a unified manner. Using Bellman-consistent pessimism, we obtain interval estimates of policies' returns, and use both the upper and the lower bounds to form a relaxation of a candidate policy's gap, which becomes our optimization objective. Our results generalize prior works and provide several additional insights. Importantly, we require a data coverage condition that improves over the recently proposed “unilateral concentrability”. Our condition allows selective coverage of deviation policies that optimally trade off between their greediness (as approximate best responses) and coverage, and we show scenarios where this leads to significantly better guarantees. As a new connection, we also show how our algorithmic framework can subsume seemingly different solution concepts designed for the special case of two-player zero-sum games.
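
The abstract's gap-relaxation idea admits a short formal sketch. In notation that is ours rather than the paper's, let $\underline{V}_i^{\pi}$ and $\overline{V}_i^{\pi}$ denote the pessimistic lower and optimistic upper bounds on player $i$'s return under joint policy $\pi$ obtained from Bellman-consistent pessimism, so that $\underline{V}_i^{\pi} \le V_i^{\pi} \le \overline{V}_i^{\pi}$. One plausible reading of the relaxed objective is then:

  % gap of player i under candidate policy pi, relaxed via the interval bounds
  % (illustrative notation; the paper's exact formulation for each equilibrium
  % concept may differ)
  \mathrm{gap}_i(\pi)
    = \max_{\pi_i'} V_i^{\pi_i',\,\pi_{-i}} - V_i^{\pi}
    \;\le\; \max_{\pi_i'} \overline{V}_i^{\pi_i',\,\pi_{-i}} - \underline{V}_i^{\pi},
  \qquad
  \hat{\pi} \in \operatorname*{arg\,min}_{\pi} \, \max_{i}
    \Big( \max_{\pi_i'} \overline{V}_i^{\pi_i',\,\pi_{-i}} - \underline{V}_i^{\pi} \Big).

The inequality holds because the true returns lie inside the estimated intervals, so the relaxed gap is a valid surrogate: any policy whose relaxed gap is small also has a small true gap. Here $\pi_i'$ ranges over player $i$'s unilateral deviations, matching the Nash case; the correlated-equilibrium variants would range over the corresponding deviation classes.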

Original language: English (US)
Pages (from-to): 40804-40829
Number of pages: 26
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: Jul 23, 2023 - Jul 29, 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
