TY - GEN
T1 - Partially-Observed Discrete-Time Risk-Sensitive Mean-Field Games
AU - Saldi, Naci
AU - Başar, Tamer
AU - Raginsky, Maxim
N1 - Funding Information:
This research is supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK) BİDEB 2232 Research Grant, in part by the Army Research Laboratory under Cooperative Agreement W911NF-17-2-0196, in part by the Air Force Office of Scientific Research (AFOSR) grant FA9550-19-1-0353, and in part by the Office of Naval Research (ONR) under MURI grant N00014-16-1-2710 and grant N00014-12-1-0998.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - We consider in this paper a general class of discrete-time partially-observed mean-field games with Polish state, action, and measurement spaces and with risk-sensitive (exponential) cost functions, which capture the risk-averse behaviour of each agent. As is standard in mean-field game models, each agent is weakly coupled with the rest of the population through its individual cost and state dynamics via the empirical distribution of the states. We first establish the existence of a mean-field equilibrium in the infinite-population limit by transforming the risk-sensitive problem into one with a risk-neutral (that is, additive instead of multiplicative) cost function, converting the underlying partially-observed stochastic control problem into a fully-observed one on the belief space, and applying the principle of dynamic programming. We then show that the mean-field equilibrium policy, when adopted by each agent, constitutes an approximate Nash equilibrium for games with sufficiently many agents.
AB - We consider in this paper a general class of discrete-time partially-observed mean-field games with Polish state, action, and measurement spaces and with risk-sensitive (exponential) cost functions, which capture the risk-averse behaviour of each agent. As is standard in mean-field game models, each agent is weakly coupled with the rest of the population through its individual cost and state dynamics via the empirical distribution of the states. We first establish the existence of a mean-field equilibrium in the infinite-population limit by transforming the risk-sensitive problem into one with a risk-neutral (that is, additive instead of multiplicative) cost function, converting the underlying partially-observed stochastic control problem into a fully-observed one on the belief space, and applying the principle of dynamic programming. We then show that the mean-field equilibrium policy, when adopted by each agent, constitutes an approximate Nash equilibrium for games with sufficiently many agents.
UR - http://www.scopus.com/inward/record.url?scp=85082484381&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082484381&partnerID=8YFLogxK
U2 - 10.1109/CDC40024.2019.9029343
DO - 10.1109/CDC40024.2019.9029343
M3 - Conference contribution
AN - SCOPUS:85082484381
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 317
EP - 322
BT - 2019 IEEE 58th Conference on Decision and Control, CDC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 58th IEEE Conference on Decision and Control, CDC 2019
Y2 - 11 December 2019 through 13 December 2019
ER -