Abstract
In this letter, we revisit the discrete-time linear quadratic regulator (LQR) problem from the perspective of receding-horizon policy gradient (RHPG), a recently developed model-free learning framework for control applications. We provide a fine-grained sample-complexity analysis of RHPG for learning a control policy that is both stabilizing and ϵ-close to the optimal LQR solution; notably, the algorithm does not require a known stabilizing control policy for initialization. Combined with the recent application of RHPG to learning the Kalman filter, our results demonstrate the general applicability of RHPG to linear control and estimation problems with streamlined analyses.
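The letter itself contains no code, but the receding-horizon idea in the abstract can be illustrated numerically: decompose the infinite-horizon LQR into finite-horizon subproblems solved backward in time, where each subproblem optimizes a single feedback gain (with later gains held fixed) and is convex in that gain, so plain policy gradient converges without a stabilizing initial policy. The sketch below is one reading of that framework, not the authors' exact algorithm; the system matrices `A`, `B`, the weights `Q`, `R`, the horizon `N`, the two-point zeroth-order gradient estimator, and all step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system and cost; A, B, Q, R, and the horizon N are
# assumptions, not taken from the letter. A is open-loop unstable.
A = np.array([[1.1, 0.3],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)
N = 15
n, m = B.shape

def rollout_cost(gains, x0):
    """Cost of playing the time-varying gains u_t = -K_t x_t from x0."""
    x, cost = x0, 0.0
    for K in gains:
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return cost + x @ Q @ x  # terminal cost (assumed P_N = Q)

def zo_gradient(K, later_gains, samples=100, r=0.05):
    """Two-point zeroth-order estimate of the gradient of the cost-to-go
    with respect to the first-stage gain K (later gains held fixed)."""
    g = np.zeros_like(K)
    for _ in range(samples):
        U = rng.standard_normal(K.shape)
        U /= np.linalg.norm(U)
        x0 = rng.standard_normal(n)
        c_plus = rollout_cost([K + r * U] + later_gains, x0)
        c_minus = rollout_cost([K - r * U] + later_gains, x0)
        g += (c_plus - c_minus) / (2.0 * r) * U
    return (K.size / samples) * g

# Backward-in-time loop: each stage's subproblem is convex in its gain,
# so gradient descent from the zero gain suffices -- no stabilizing
# initialization is required.
gains = []  # learned gains for stages t+1, ..., N-1
for t in reversed(range(N)):
    K = np.zeros((m, n))  # arbitrary (non-stabilizing) initialization
    for _ in range(60):
        K -= 0.05 * zo_gradient(K, gains)
    gains.insert(0, K)

K0 = gains[0]  # approximates the infinite-horizon LQR gain
print("learned gain K0:", K0)
print("closed-loop spectral radius:",
      max(abs(np.linalg.eigvals(A - B @ K0))))
```

Sampling the same `x0` for both perturbed rollouts makes the initial-state randomness cancel to first order in the two-point estimate, which keeps the gradient noise manageable; a closed-loop spectral radius below 1 in the printout indicates the learned gain is stabilizing.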
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Pages (from-to) | 1664-1669 |
| Number of pages | 6 |
| Journal | IEEE Control Systems Letters |
| Volume | 7 |
| DOIs | |
| State | Published - 2023 |
Keywords
- Optimal control
- optimization
- reinforcement learning
- sample complexity
ASJC Scopus subject areas
- Control and Optimization
- Control and Systems Engineering