Model-Free μ Synthesis via Adversarial Reinforcement Learning

Darioush Keivan, Aaron Havens, Peter Seiler, Geir Dullerud, Bin Hu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Motivated by the recent empirical success of policy-based reinforcement learning (RL), there has been a research trend studying the performance of policy-based RL methods on standard control benchmark problems. In this paper, we examine the effectiveness of policy-based RL methods on an important robust control problem, namely μ synthesis. We build a connection between robust adversarial RL and μ synthesis, and develop a model-free version of the well-known DK-iteration for solving state-feedback μ synthesis with static D-scaling. In the proposed algorithm, the K step mimics the classical central path algorithm by incorporating a recently developed double-loop adversarial RL method as a subroutine, and the D step is based on model-free finite-difference approximation. An extensive numerical study is also presented to demonstrate the utility of our proposed model-free algorithm. Our study sheds new light on the connections between adversarial RL and robust control.
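To illustrate the model-free flavor of the D step described above, here is a toy numerical sketch (not the paper's algorithm): for a fixed matrix M standing in for a closed-loop gain, it minimizes the statically scaled spectral norm ||D M D⁻¹|| over a diagonal scaling D = diag(exp(d)) using only cost evaluations, via a central finite-difference gradient. The names `scaled_norm` and `finite_diff_grad`, the exponential parametrization, and the step-halving rule are illustrative assumptions; the adversarial-RL K step is omitted entirely.

```python
import numpy as np

def scaled_norm(M, d):
    """Cost for the D step: spectral norm ||D M D^{-1}||_2 with D = diag(exp(d)).

    The exponential parametrization keeps the static D-scaling positive
    definite without explicit constraints (an assumption of this sketch).
    """
    D, Dinv = np.diag(np.exp(d)), np.diag(np.exp(-d))
    return np.linalg.norm(D @ M @ Dinv, 2)

def finite_diff_grad(f, d, eps=1e-5):
    """Model-free central finite-difference gradient: uses only cost evaluations."""
    g = np.zeros_like(d)
    for i in range(d.size):
        e = np.zeros_like(d)
        e[i] = eps
        g[i] = (f(d + e) - f(d - e)) / (2.0 * eps)
    return g

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))   # stand-in for a fixed closed-loop gain
d = np.zeros(4)                   # initial log-scaling, i.e. D = I
j0 = scaled_norm(M, d)

# Descend on the scaling, halving the step whenever a trial step fails to
# decrease the cost, so progress is monotone by construction.
step = 0.5
for _ in range(100):
    cand = d - step * finite_diff_grad(lambda x: scaled_norm(M, x), d)
    if scaled_norm(M, cand) < scaled_norm(M, d):
        d = cand
    else:
        step *= 0.5

print(f"scaled norm: {j0:.4f} -> {scaled_norm(M, d):.4f}")
```

In the actual DK-iteration this D step would alternate with a K step that re-synthesizes the controller against the current scaling; here only the scaling update is shown.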

Original language: English (US)
Title of host publication: 2022 American Control Conference, ACC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 7
ISBN (Electronic): 9781665451963
State: Published - 2022
Event: 2022 American Control Conference, ACC 2022 - Atlanta, United States
Duration: Jun 8 2022 - Jun 10 2022

Publication series

Name: Proceedings of the American Control Conference
ISSN (Print): 0743-1619


Conference: 2022 American Control Conference, ACC 2022
Country/Territory: United States

ASJC Scopus subject areas

  • Electrical and Electronic Engineering


