Convergence Guarantees of Policy Optimization Methods for Markovian Jump Linear Systems

Joao Paulo Jansch-Porto, Bin Hu, Geir E. Dullerud

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Recently, policy optimization for control purposes has received renewed attention due to the increasing interest in reinforcement learning. In this paper, we investigate the convergence of policy optimization for quadratic control of Markovian jump linear systems (MJLS). First, we study the optimization landscape of direct policy optimization for MJLS and, in particular, show that despite the non-convexity of the resulting problem, the unique stationary point is the global optimal solution. Next, we prove that the Gauss-Newton method and the natural policy gradient method converge to the optimal state-feedback controller for MJLS at a linear rate if initialized at a controller that stabilizes the closed-loop dynamics in the mean-square sense. We propose a novel Lyapunov argument to fix a key stability issue in the convergence proof. Finally, we present a numerical example to support our theory. Our work brings new insights for understanding the performance of policy learning methods on controlling unknown MJLS.
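To make the setting concrete, the sketch below simulates a Markovian jump linear system x_{t+1} = A_{w_t} x_t + B_{w_t} u_t under a mode-dependent state-feedback policy u_t = -K_{w_t} x_t, and Monte-Carlo estimates E||x_T||^2 to check mean-square stability of the closed loop. All matrices, gains, and the transition chain are hypothetical illustrations, not the paper's numerical example or the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode MJLS (illustrative matrices only).
A = [np.array([[1.1, 0.2], [0.0, 0.9]]),    # mode 0 dynamics
     np.array([[0.8, 0.0], [0.3, 1.05]])]   # mode 1 dynamics
B = [np.eye(2), np.eye(2)]
P = np.array([[0.9, 0.1],                   # Markov chain transition matrix
              [0.2, 0.8]])

# Mode-dependent gains, hand-picked so each closed-loop matrix
# A[w] - B[w] @ K[w] has spectral radius well below 1.
K = [np.array([[0.6, 0.2], [0.0, 0.4]]),
     np.array([[0.3, 0.0], [0.3, 0.65]])]

def avg_sq_norm(K, T=50, trials=200):
    """Monte-Carlo estimate of E||x_T||^2 under u_t = -K[w_t] x_t."""
    total = 0.0
    for _ in range(trials):
        x = np.array([1.0, 1.0])
        w = 0
        for _ in range(T):
            x = (A[w] - B[w] @ K[w]) @ x       # closed-loop step in mode w
            w = rng.choice(2, p=P[w])          # Markov mode switch
        total += x @ x
    return total / trials

# A small value indicates the gains stabilize the system in the
# mean-square sense, the initialization the paper's rates require.
print(avg_sq_norm(K))
```

In the paper's setting, such a mean-square-stabilizing initial controller is the starting point from which Gauss-Newton or natural policy gradient updates converge linearly to the optimal gains; the simulation above only checks the stability property itself.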

Original language: English (US)
Title of host publication: 2020 American Control Conference, ACC 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 9781538682661
State: Published - Jul 2020
Externally published: Yes
Event: 2020 American Control Conference, ACC 2020 - Denver, United States
Duration: Jul 1 2020 - Jul 3 2020

Publication series

Name: Proceedings of the American Control Conference
ISSN (Print): 0743-1619


Conference: 2020 American Control Conference, ACC 2020
Country/Territory: United States

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

