Game-theoretic learning in distributed control

Jason R. Marden, Jeff S. Shamma

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In distributed-architecture control problems, a collection of interconnected decision-making components seeks to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy and transportation systems. One approach to the control of such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components' incentives and the rules that dictate how components react to the decisions of other components. In game-theoretic language, the incentives are defined through utility functions, and the reaction rules are online learning dynamics. This chapter presents an overview of this approach, covering basic concepts in game theory, special game classes, measures of distributed efficiency, utility design, and online learning rules, all with the interpretation of using game theory as a prescriptive paradigm for distributed control design.
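As a minimal illustration of the "utility functions plus learning dynamics" pattern the abstract describes (this sketch is not taken from the chapter itself), consider a toy congestion game: each player picks one of two resources, and a player's utility is the negative of the congestion on its chosen resource. Best-response dynamics, one of the simplest online learning rules, then lets players update in turn until no one can improve unilaterally. All names below (`utility`, `best_response_dynamics`, the congestion-style payoff) are illustrative assumptions, not the chapter's notation.

```python
def utility(player, action_profile):
    # Congestion-style utility (an assumed example): a player prefers
    # the resource shared with fewer other players.
    chosen = action_profile[player]
    congestion = sum(1 for a in action_profile if a == chosen)
    return -congestion

def best_response(player, action_profile, actions):
    # Pick the action maximizing this player's utility, with all
    # other players' actions held fixed.
    def score(a):
        profile = list(action_profile)
        profile[player] = a
        return utility(player, profile)
    return max(actions, key=score)

def best_response_dynamics(n_players, actions, initial, max_rounds=100):
    # Round-robin best-response updates; stops when no player can
    # improve unilaterally, i.e., at a pure Nash equilibrium.
    profile = list(initial)
    for _ in range(max_rounds):
        changed = False
        for p in range(n_players):
            br = best_response(p, profile, actions)
            if br != profile[p]:
                profile[p] = br
                changed = True
        if not changed:
            break
    return profile
```

Starting both players on resource 0, the dynamics drive them to split across the two resources, the unique (up to relabeling) equilibrium of this game. In a congestion game this convergence is guaranteed because the game is a potential game, one of the special game classes the chapter surveys.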

Original language: English (US)
Title of host publication: Handbook of Dynamic Game Theory
Publisher: Springer
Pages: 511-546
Number of pages: 36
ISBN (Electronic): 9783319443744
ISBN (Print): 9783319443737
DOIs
State: Published - Aug 12 2018
Externally published: Yes

Keywords

  • Distributed decision systems
  • Evolutionary games
  • Learning in games
  • Multiagent systems

ASJC Scopus subject areas

  • Mathematics (all)
  • Economics, Econometrics and Finance (all)
  • Business, Management and Accounting (all)
