TY - JOUR
T1 - Global Convergence of Policy Gradient Primal-Dual Methods for Risk-Constrained LQRs
AU - Zhao, Feiran
AU - You, Keyou
AU - Basar, Tamer
N1 - The work of Feiran Zhao and Keyou You was supported in part by the National Natural Science Foundation of China under Grant 62033006, in part by the Tsinghua-Foshan Innovation Special Fund (TFISF), and in part by the Tsinghua University Initiative Scientific Research Program. The work of Tamer Başar was supported by the ONR MURI under Grant N00014-16-1-2710.
PY - 2023/5/1
Y1 - 2023/5/1
N2 - While the techniques in optimal control theory are often model-based, the policy optimization (PO) approach directly optimizes the performance metric of interest. Even though it has been an essential approach for reinforcement learning problems, there is little theoretical understanding of its performance. In this article, we focus on the risk-constrained linear quadratic regulator problem via the PO approach, which requires addressing a challenging nonconvex constrained optimization problem. To solve it, we first build on our earlier result that an optimal policy has a time-invariant affine structure to show that the associated Lagrangian function is coercive, locally gradient dominated, and has a locally Lipschitz continuous gradient, based on which we establish strong duality. Then, we design policy gradient primal-dual methods with global convergence guarantees in both model-based and sample-based settings. Finally, we use samples of system trajectories in simulations to validate our methods.
KW - Gradient descent
KW - policy optimization (PO)
KW - reinforcement learning
KW - risk-constrained linear quadratic regulator (RC-LQR)
KW - stochastic control
UR - http://www.scopus.com/inward/record.url?scp=85147217314&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147217314&partnerID=8YFLogxK
U2 - 10.1109/TAC.2023.3234176
DO - 10.1109/TAC.2023.3234176
M3 - Article
AN - SCOPUS:85147217314
SN - 0018-9286
VL - 68
SP - 2934
EP - 2949
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
IS - 5
ER -