TY - GEN
T1 - Game Theory for Autonomy
T2 - 2023 American Control Conference, ACC 2023
AU - Vamvoudakis, Kyriakos G.
AU - Fotiadis, Filippos
AU - Hespanha, Joao P.
AU - Chinchilla, Raphael
AU - Yang, Guosong
AU - Liu, Mushuang
AU - Shamma, Jeff S.
AU - Pavel, Lacra
N1 - Publisher Copyright:
© 2023 American Automatic Control Council.
PY - 2023
Y1 - 2023
N2 - Finding Nash equilibria in non-cooperative games can be, in general, an exceptionally challenging task. This is due to various factors, including but not limited to the cost functions of the game being nonconvex/nonconcave, the players of the game having limited information about one another, or issues of computational complexity. The present tutorial draws motivation from this harsh reality and provides methods to approximate Nash or min-max equilibria in non-ideal settings using both optimization- and learning-based techniques. The tutorial acknowledges, however, that such techniques may not always converge, but may instead lead to oscillations or even chaos. In that respect, tools from passivity and dissipativity theory are provided, which can offer explanations for these divergent behaviors. Finally, the tutorial highlights that, more often than commonly thought, the search for equilibrium policies is simply in vain; instead, bounded rationality and non-equilibrium policies can be more realistic to employ, owing to some players learning imperfectly or being relatively naive, i.e., "bounded rational." The efficacy of such plays is demonstrated in the context of autonomous driving systems, where it is explicitly shown that they can guarantee vehicle safety.
AB - Finding Nash equilibria in non-cooperative games can be, in general, an exceptionally challenging task. This is due to various factors, including but not limited to the cost functions of the game being nonconvex/nonconcave, the players of the game having limited information about one another, or issues of computational complexity. The present tutorial draws motivation from this harsh reality and provides methods to approximate Nash or min-max equilibria in non-ideal settings using both optimization- and learning-based techniques. The tutorial acknowledges, however, that such techniques may not always converge, but may instead lead to oscillations or even chaos. In that respect, tools from passivity and dissipativity theory are provided, which can offer explanations for these divergent behaviors. Finally, the tutorial highlights that, more often than commonly thought, the search for equilibrium policies is simply in vain; instead, bounded rationality and non-equilibrium policies can be more realistic to employ, owing to some players learning imperfectly or being relatively naive, i.e., "bounded rational." The efficacy of such plays is demonstrated in the context of autonomous driving systems, where it is explicitly shown that they can guarantee vehicle safety.
UR - http://www.scopus.com/inward/record.url?scp=85167806183&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85167806183&partnerID=8YFLogxK
U2 - 10.23919/ACC55779.2023.10156432
DO - 10.23919/ACC55779.2023.10156432
M3 - Conference contribution
AN - SCOPUS:85167806183
T3 - Proceedings of the American Control Conference
SP - 4363
EP - 4380
BT - 2023 American Control Conference, ACC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 31 May 2023 through 2 June 2023
ER -