Learning the Kalman Filter with Fine-Grained Sample Complexity

Xiangyuan Zhang, Bin Hu, Tamer Basar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We develop the first end-to-end sample complexity analysis of model-free policy gradient (PG) methods for discrete-time infinite-horizon Kalman filtering. Specifically, we introduce the receding-horizon policy gradient (RHPG-KF) framework and establish the sample complexity of RHPG-KF in learning a stabilizing filter that is ϵ-close to the optimal Kalman filter. Notably, the proposed RHPG-KF framework neither requires the system to be open-loop stable nor assumes any prior knowledge of a stabilizing filter. Our results shed light on applying model-free PG methods to control a linear dynamical system whose state measurements could be corrupted by statistical noise and other (possibly adversarial) disturbances.
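To make the model-free setting concrete, below is a minimal sketch of a generic two-point zeroth-order policy-gradient loop that tunes a filter gain from simulated trajectories of a toy linear system. The system matrices, noise levels, horizon, smoothing radius, and step size are all illustrative assumptions, and this is not the authors' RHPG-KF procedure (which relies on a receding-horizon decomposition); the sketch only illustrates the kind of derivative-free gradient estimation that model-free PG methods use.

import numpy as np

# Illustrative two-point zeroth-order PG loop for tuning a filter gain L.
# NOT the authors' RHPG-KF algorithm; a generic sketch on an assumed toy
# system, for intuition about model-free gradient estimation only.

rng = np.random.default_rng(0)

# Toy system x_{t+1} = A x_t + w_t, y_t = C x_t + v_t. The learner is
# assumed to access it only through simulated rollouts, not through A, C.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
W, V = 0.1, 0.1  # process / measurement noise std devs (assumptions)

def rollout_cost(L, T=50):
    """Average squared prediction error of the filter
    xhat_{t+1} = A xhat_t + L (y_t - C xhat_t) along one trajectory."""
    x = rng.normal(size=2)
    xhat = np.zeros(2)
    cost = 0.0
    for _ in range(T):
        y = C @ x + V * rng.normal()            # noisy measurement, shape (1,)
        xhat = A @ xhat + L @ (y - C @ xhat)    # predictor-form filter update
        x = A @ x + W * rng.normal(size=2)      # true state advances
        cost += float((x - xhat) @ (x - xhat))  # next-state prediction error
    return cost / T

L = np.zeros((2, 1))   # filter gain to be learned
r, lr = 0.1, 1e-3      # smoothing radius and step size (tuning assumptions)

for _ in range(2000):
    # Two-point zeroth-order gradient estimate: perturb L along a random
    # unit direction U and difference the rollout costs.
    U = rng.normal(size=L.shape)
    U /= np.linalg.norm(U)
    delta = rollout_cost(L + r * U) - rollout_cost(L - r * U)
    L -= lr * (L.size / (2 * r)) * delta * U

print("learned gain L:", L.ravel())

Each gradient estimate here costs two rollouts; the paper's contribution is, in part, to quantify how many such samples suffice to reach an ϵ-accurate stabilizing filter, which this sketch does not address.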

Original language: English (US)
Title of host publication: 2023 American Control Conference, ACC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4549-4554
Number of pages: 6
ISBN (Electronic): 9798350328066
State: Published - 2023
Event: 2023 American Control Conference, ACC 2023 - San Diego, United States
Duration: May 31, 2023 - Jun 2, 2023

Publication series

Name: Proceedings of the American Control Conference
Volume: 2023-May
ISSN (Print): 0743-1619

Conference

Conference: 2023 American Control Conference, ACC 2023
Country/Territory: United States
City: San Diego
Period: 5/31/23 - 6/2/23

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
