TY - JOUR
T1 - Approximate Nash equilibria in partially observed stochastic games with mean-field interactions
AU - Saldi, Naci
AU - Başar, Tamer
AU - Raginsky, Maxim
N1 - Funding Information:
This research was supported in part by the U.S. Air Force Office of Scientific Research through the Multidisciplinary University Research Initiative (MURI) [Grant FA9550-10-1-0573] and in part by the Office of Naval Research [MURI Grants N00014-16-1-2710 and N00014-12-1-0998].
PY - 2019
Y1 - 2019
AB - Establishing the existence of Nash equilibria for partially observed stochastic dynamic games is known to be quite challenging, with the difficulties stemming from the noisy nature of the measurements available to individual players (agents) and the decentralized nature of this information. When the number of players is sufficiently large and the interactions among agents are of the mean-field type, one way to overcome this challenge is to investigate the infinite-population limit of the problem, which leads to a mean-field game. In this paper, we consider discrete-time partially observed mean-field games with infinite-horizon discounted-cost criteria. By converting the original partially observed stochastic control problem to a fully observed one on the belief space and applying the dynamic programming principle, we establish the existence of Nash equilibria for these game models under very mild technical conditions. We then show that the mean-field equilibrium policy, when adopted by each agent, forms an approximate Nash equilibrium for games with sufficiently many agents.
KW - Approximate Nash equilibrium
KW - Mean-field games
KW - Partially observed stochastic control
UR - http://www.scopus.com/inward/record.url?scp=85071837173&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071837173&partnerID=8YFLogxK
U2 - 10.1287/moor.2018.0957
DO - 10.1287/moor.2018.0957
M3 - Article
AN - SCOPUS:85071837173
VL - 44
SP - 1006
EP - 1033
JO - Mathematics of Operations Research
JF - Mathematics of Operations Research
SN - 0364-765X
IS - 3
ER -