Approximate Nash equilibria in partially observed stochastic games with mean-field interactions

Research output: Contribution to journal › Article

Abstract

Establishing the existence of Nash equilibria for partially observed stochastic dynamic games is known to be quite challenging, with the difficulties stemming from the noisy nature of the measurements available to individual players (agents) and the decentralized nature of this information. When the number of players is sufficiently large and the interactions among agents are of the mean-field type, one way to overcome this challenge is to investigate the infinite-population limit of the problem, which leads to a mean-field game. In this paper, we consider discrete-time partially observed mean-field games with infinite-horizon discounted-cost criteria. Using the technique of converting the original partially observed stochastic control problem to a fully observed one on the belief space, together with the dynamic programming principle, we establish the existence of Nash equilibria for these game models under very mild technical conditions. Then, we show that the mean-field equilibrium policy, when adopted by each agent, forms an approximate Nash equilibrium for games with sufficiently many agents.
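
To make the final claim concrete, the approximate-Nash property referred to above can be sketched as follows; this is a generic formulation in notation of our own choosing (J_i^N, \pi^*, \varepsilon), not necessarily that of the paper.

% Sketch of the approximate (epsilon-)Nash statement; the notation below is assumed, not taken from the paper.
% J_i^N(\pi_1, \dots, \pi_N): agent i's infinite-horizon discounted cost in the N-agent game when
% agent j uses policy \pi_j. Let \pi^* denote the mean-field equilibrium policy. The result asserts
% that for any \varepsilon > 0 and all sufficiently large N, the symmetric profile (\pi^*, \dots, \pi^*)
% satisfies
\[
  J_i^N(\pi^*, \dots, \pi^*)
  \;\le\;
  \inf_{\pi_i} J_i^N(\pi^*, \dots, \pi^*, \pi_i, \pi^*, \dots, \pi^*) + \varepsilon
  \qquad \text{for every agent } i \in \{1, \dots, N\},
\]
% i.e., no agent can lower its cost by more than \varepsilon through a unilateral deviation.
% The policy \pi^* itself is obtained in the infinite-population (mean-field) limit: it must be optimal
% for the fully observed belief-space control problem driven by a fixed flow of state distributions,
% and that flow must in turn be the one generated when a representative agent uses \pi^*.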

Original language: English (US)
Pages (from-to): 1006-1033
Number of pages: 28
Journal: Mathematics of Operations Research
Volume: 44
Issue number: 3
DOIs: 10.1287/moor.2018.0957
State: Published - Jan 1 2019

Keywords

  • Approximate Nash equilibrium
  • Mean-field games
  • Partially observed stochastic control

ASJC Scopus subject areas

  • Mathematics (all)
  • Computer Science Applications
  • Management Science and Operations Research

Cite this

Approximate Nash equilibria in partially observed stochastic games with mean-field interactions. / Saldi, Naci; Başar, Tamer; Raginsky, Maxim.

In: Mathematics of Operations Research, Vol. 44, No. 3, 01.01.2019, pp. 1006-1033.

Research output: Contribution to journal › Article

@article{def5bc04c59c4a4cb24b638e55ade1ae,
title = "Approximate Nash equilibria in partially observed stochastic games with mean-field interactions",
abstract = "Establishing the existence of Nash equilibria for partially observed stochastic dynamic games is known to be quite challenging, with the difficulties stemming from the noisy nature of the measurements available to individual players (agents) and the decentralized nature of this information. When the number of players is sufficiently large and the interactions among agents is of the mean-field type, one way to overcome this challenge is to investigate the infinite-population limit of the problem, which leads to a mean-field game. In this paper, we consider discrete-time partially observed mean-field games with infinite-horizon discounted-cost criteria. Using the technique of converting the original partially observed stochastic control problem to a fully observed one on the belief space and the dynamic programming principle, we establish the existence of Nash equilibria for these game models under very mild technical conditions. Then, we show that the mean-field equilibrium policy, when adopted by each agent, forms an approximate Nash equilibrium for games with sufficiently many agents.",
keywords = "Approximate Nash equilibrium, Mean-field games, Partially observed stochastic control",
author = "Naci Saldi and Tamer Başar and Maxim Raginsky",
year = "2019",
month = "1",
day = "1",
doi = "10.1287/moor.2018.0957",
language = "English (US)",
volume = "44",
pages = "1006--1033",
journal = "Mathematics of Operations Research",
issn = "0364-765X",
publisher = "INFORMS Inst.for Operations Res.and the Management Sciences",
number = "3",

}
