BFGS-ADMM for Large-Scale Distributed Optimization

Yichuan Li, Yonghai Gong, Nikolaos M. Freris, Petros Voulgaris, Dusan Stipanovic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We consider a class of distributed optimization problems where the objective function consists of a sum of strongly convex and smooth functions and a (possibly nonsmooth) convex regularizer. A multi-agent network is assumed, where each agent holds a private cost function and cooperates with its neighbors to compute the optimum of the aggregate objective. We propose a quasi-Newton Alternating Direction Method of Multipliers (ADMM) where the primal update is solved inexactly with approximated curvature information. By introducing an intermediate consensus variable, we achieve a block-diagonal Hessian, which eliminates the need for inner communication loops within the network when computing the update direction. We establish global linear convergence to the optimal primal-dual solution without the need for backtracking line search, under the assumption that the component cost functions are strongly convex with Lipschitz continuous gradients. Numerical simulations on real datasets demonstrate the advantages of the proposed method over the state of the art.
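To illustrate the general idea described in the abstract, the following is a minimal sketch (not the paper's exact distributed algorithm) of ADMM for a smooth loss plus an l1 regularizer, in which the primal update is solved inexactly with a BFGS-approximated Hessian rather than an exact inner minimization. All names and the problem instance are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (handles the nonsmooth regularizer)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bfgs_admm(A, b, lam=0.1, rho=1.0, iters=300):
    """Illustrative sketch: minimize 0.5*||A x - b||^2 + lam*||x||_1
    via ADMM on the splitting x = z, where the x-update takes a single
    quasi-Newton step using a BFGS approximation B of the Hessian of
    the (smooth part of the) augmented Lagrangian, instead of solving
    the subproblem exactly."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # Initialize B as L*I (L = gradient Lipschitz estimate) so the
    # first step is a safe damped gradient step.
    L = np.linalg.norm(A, 2) ** 2 + rho
    B = L * np.eye(n)
    for _ in range(iters):
        # Gradient of the smooth augmented-Lagrangian term in x.
        g = A.T @ (A @ x - b) + rho * (x - z + u)
        # Inexact x-update: one quasi-Newton step.
        x_new = x - np.linalg.solve(B, g)
        g_new = A.T @ (A @ x_new - b) + rho * (x_new - z + u)
        # Standard BFGS update of B, guarded by the curvature condition.
        s, y = x_new - x, g_new - g
        if y @ s > 1e-12:
            Bs = B @ s
            B = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
        x = x_new
        # z-update: proximal step on the regularizer (soft-thresholding).
        z = soft_threshold(x + u, lam / rho)
        # Dual ascent step.
        u = u + x - z
    return x, z
```

Because the smooth part here is quadratic, the secant pairs `(s, y)` satisfy the curvature condition automatically and the BFGS matrix recovers the true curvature on the explored subspace; no backtracking line search is used, mirroring the property claimed in the abstract.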

Original language: English (US)
Title of host publication: 60th IEEE Conference on Decision and Control, CDC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 9781665436595
State: Published - 2021
Event: 60th IEEE Conference on Decision and Control, CDC 2021 - Austin, United States
Duration: Dec 13, 2021 to Dec 17, 2021

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
ISSN (Print): 0743-1546
ISSN (Electronic): 2576-2370


Conference: 60th IEEE Conference on Decision and Control, CDC 2021
Country/Territory: United States

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization


