TY - GEN
T1 - Convergence Guarantees of Policy Optimization Methods for Markovian Jump Linear Systems
AU - Jansch-Porto, Joao Paulo
AU - Hu, Bin
AU - Dullerud, Geir E.
N1 - Publisher Copyright:
© 2020 AACC.
PY - 2020/7
Y1 - 2020/7
N2 - Recently, policy optimization for control purposes has received renewed attention due to the increasing interest in reinforcement learning. In this paper, we investigate the convergence of policy optimization for quadratic control of Markovian jump linear systems (MJLS). First, we study the optimization landscape of direct policy optimization for MJLS and, in particular, show that, despite the non-convexity of the resulting problem, the unique stationary point is the globally optimal solution. Next, we prove that the Gauss-Newton method and the natural policy gradient method converge to the optimal state-feedback controller for MJLS at a linear rate when initialized at a controller that stabilizes the closed-loop dynamics in the mean-square sense. We propose a novel Lyapunov argument to fix a key stability issue in the convergence proof. Finally, we present a numerical example to support our theory. Our work provides new insights into the performance of policy learning methods for controlling unknown MJLS.
AB - Recently, policy optimization for control purposes has received renewed attention due to the increasing interest in reinforcement learning. In this paper, we investigate the convergence of policy optimization for quadratic control of Markovian jump linear systems (MJLS). First, we study the optimization landscape of direct policy optimization for MJLS and, in particular, show that, despite the non-convexity of the resulting problem, the unique stationary point is the globally optimal solution. Next, we prove that the Gauss-Newton method and the natural policy gradient method converge to the optimal state-feedback controller for MJLS at a linear rate when initialized at a controller that stabilizes the closed-loop dynamics in the mean-square sense. We propose a novel Lyapunov argument to fix a key stability issue in the convergence proof. Finally, we present a numerical example to support our theory. Our work provides new insights into the performance of policy learning methods for controlling unknown MJLS.
UR - http://www.scopus.com/inward/record.url?scp=85089560555&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089560555&partnerID=8YFLogxK
U2 - 10.23919/ACC45564.2020.9147571
DO - 10.23919/ACC45564.2020.9147571
M3 - Conference contribution
AN - SCOPUS:85089560555
T3 - Proceedings of the American Control Conference
SP - 2882
EP - 2887
BT - 2020 American Control Conference, ACC 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 American Control Conference, ACC 2020
Y2 - 1 July 2020 through 3 July 2020
ER -