Abstract
Recently, policy optimization has received renewed attention from the control community due to its various applications in reinforcement learning tasks. In this paper, we investigate the global convergence of the gradient method for quadratic optimal control of discrete-time Markovian jump linear systems (MJLS). First, we study the optimization landscape of direct policy optimization for MJLS, with static state feedback controllers and quadratic performance costs. Despite the non-convexity of the resulting problem, we are still able to identify several useful properties such as coercivity, gradient dominance, and smoothness. Based on these properties, we prove that the gradient method converges to the optimal state feedback controller for MJLS at a linear rate if initialized at a mean-square stabilizing controller. This work provides new insights into the performance of the policy gradient method on the Markovian jump linear quadratic control problem.
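To make the setting concrete, the sketch below illustrates direct policy optimization over mode-dependent static state feedback gains for a small discrete-time MJLS with a quadratic cost. It is not the paper's method: the paper analyzes the exact policy gradient, whereas this sketch approximates it by finite differences on a Monte-Carlo estimate of a finite-horizon cost, and all system data (A, B, Q, R, the transition matrix P, horizon, step size, initial gains) are hypothetical.

```python
# Illustrative sketch only: gradient descent on mode-dependent static feedback
# gains K = (K_1, K_2) for a two-mode MJLS  x_{t+1} = A[w_t] x_t + B[w_t] u_t,
# with u_t = -K[w_t] x_t and a quadratic running cost. The gradient is
# approximated by finite differences on a simulated cost (the paper studies the
# exact gradient). All numerical data here are hypothetical.
import numpy as np

A = [np.array([[1.0, 0.5], [0.0, 1.0]]), np.array([[0.9, 0.2], [0.1, 0.8]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.5], [1.0]])]
Q = np.eye(2)
R = np.eye(1)
P = np.array([[0.9, 0.1], [0.3, 0.7]])  # Markov mode transition probabilities
T, n_rollouts = 50, 20                   # horizon and Monte-Carlo sample size


def cost(K, seed=0):
    """Monte-Carlo estimate of the finite-horizon quadratic cost under u = -K[w] x.

    A fixed seed makes the estimate a deterministic function of K, so the
    finite-difference gradient below is meaningful.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_rollouts):
        x = rng.standard_normal(2)  # random initial state
        w = 0                       # initial mode
        for _ in range(T):
            u = -K[w] @ x
            total += x @ Q @ x + u @ R @ u
            x = A[w] @ x + B[w] @ u
            w = rng.choice(2, p=P[w])
    return total / n_rollouts


def finite_diff_grad(K, eps=1e-3):
    """Zeroth-order approximation of the cost gradient w.r.t. each mode's gain."""
    grads = [np.zeros_like(Ki) for Ki in K]
    base = cost(K)
    for i, Ki in enumerate(K):
        for idx in np.ndindex(Ki.shape):
            Kp = [k.copy() for k in K]
            Kp[i][idx] += eps
            grads[i][idx] = (cost(Kp) - base) / eps
    return grads


# Gradient descent from an initial gain assumed to be mean-square stabilizing.
K = [np.array([[0.5, 0.8]]), np.array([[0.4, 0.7]])]
step = 1e-3
for it in range(100):
    g = finite_diff_grad(K)
    K = [Ki - step * gi for Ki, gi in zip(K, g)]
    if it % 20 == 0:
        print(f"iter {it:3d}  cost ~ {cost(K):.2f}")
```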
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1 |
| Number of pages | 1 |
| Journal | IEEE Transactions on Automatic Control |
| DOIs | |
| State | Accepted/In press - 2022 |
Keywords
- Convergence
- Costs
- Gradient methods
- Linear systems
- Markov processes
- Markovian jump linear systems
- Optimization
- State feedback
- optimal control
- policy gradient methods
- reinforcement learning
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications
- Electrical and Electronic Engineering