Distributed Reinforcement Learning for Decentralized Linear Quadratic Control: A Derivative-Free Policy Optimization Approach

Yingying Li, Yujie Tang, Runyu Zhang, Na Li

Research output: Contribution to journal › Article › peer-review

Abstract

This article considers a distributed reinforcement learning problem for decentralized linear quadratic (LQ) control with partial state observations and local costs. We propose a zero-order distributed policy optimization algorithm (ZODPO) that learns linear local controllers in a distributed fashion, leveraging ideas from policy gradient methods, zero-order optimization, and consensus algorithms. In ZODPO, each agent estimates the global cost by consensus and then performs a local policy gradient step in parallel based on zero-order gradient estimation. ZODPO requires only limited communication and storage, even in large-scale systems. Further, we investigate the nonasymptotic performance of ZODPO and show that the sample complexity to approach a stationary point is polynomial in the inverse of the error tolerance and in the problem dimensions, demonstrating the scalability of ZODPO. We also show that the controllers generated throughout ZODPO are stabilizing with high probability. Finally, we numerically test ZODPO on multizone HVAC systems.
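
To make the algorithmic pattern described in the abstract concrete (consensus-based global cost estimation followed by zero-order local policy gradient steps), below is a minimal, purely illustrative Python sketch on a toy two-agent LQ system. All quantities here, the dynamics matrix A, the consensus weights W, the smoothing radius, the one-point gradient estimator, and the clipping of the gains, are assumptions made for the illustration; they are not the paper's exact algorithm, notation, or guarantees.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dynamics x_{t+1} = A x_t + u_t + w_t with two agents, each observing and
    # controlling one coordinate of the state through a scalar local gain.
    A = np.array([[0.5, 0.1],
                  [0.0, 0.4]])
    n_agents, horizon = 2, 50
    K = np.zeros(n_agents)                        # u_i = -K[i] * x_i (local linear policy)
    W = np.full((n_agents, n_agents), 1.0 / n_agents)   # doubly stochastic consensus weights
    smooth_radius, step_size = 0.1, 0.05

    def rollout_local_costs(gains):
        """Simulate one trajectory and return each agent's average local quadratic cost."""
        x = rng.normal(size=n_agents)
        costs = np.zeros(n_agents)
        for _ in range(horizon):
            u = -gains * x                        # decentralized static feedback
            costs += x**2 + 0.1 * u**2            # local cost of agent i: x_i^2 + 0.1 u_i^2
            x = A @ x + u + 0.01 * rng.normal(size=n_agents)
        return costs / horizon

    for _ in range(200):
        # Each agent perturbs only its own gain with an independent random sign.
        directions = rng.choice([-1.0, 1.0], size=n_agents)
        local_costs = rollout_local_costs(K + smooth_radius * directions)

        # Consensus rounds: every agent forms an estimate of the average global cost.
        estimates = local_costs.copy()
        for _ in range(5):
            estimates = W @ estimates

        # One-point zero-order gradient estimate, then parallel local gradient steps.
        grad_estimates = estimates * directions / smooth_radius
        K = np.clip(K - step_size * grad_estimates, -0.4, 1.3)  # crude projection for stability

    print("learned local gains:", K)

The clipping of the gains above is only a quick safeguard for this toy example; in the paper, the stability of the controllers generated throughout ZODPO is instead established with high probability as part of the analysis.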

Original language: English (US)
Pages (from-to): 6429-6444
Number of pages: 16
Journal: IEEE Transactions on Automatic Control
Volume: 67
Issue number: 12
DOIs
State: Published - Dec 1 2022
Externally published: Yes

Keywords

  • Distributed reinforcement learning (RL)
  • linear quadratic regulator (LQR)
  • zero-order optimization

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
