TY - GEN
T1 - BFGS-ADMM for Large-Scale Distributed Optimization
AU - Li, Yichuan
AU - Gong, Yonghai
AU - Freris, Nikolaos M.
AU - Voulgaris, Petros
AU - Stipanovic, Dusan
N1 - Funding Information:
This work was supported by the Ministry of Science and Technology of China under grant 2019YFB2102200 and the Anhui Dept. of Science and Technology under grant 201903a05020049.
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - We consider a class of distributed optimization problems where the objective function consists of a sum of strongly convex and smooth functions and a (possibly nonsmooth) convex regularizer. A multi-agent network is assumed, where each agent holds a private cost function and cooperates with its neighbors to compute the optimum of the aggregate objective. We propose a quasi-Newton Alternating Direction Method of Multipliers (ADMM) where the primal update is solved inexactly with approximated curvature information. By introducing an intermediate consensus variable, we achieve a block-diagonal Hessian, which eliminates the need for inner communication loops within the network when computing the update direction. We establish global linear convergence to the optimal primal-dual solution without the need for backtracking line search, under the assumption that the component cost functions are strongly convex with Lipschitz continuous gradients. Numerical simulations on real datasets demonstrate the advantages of the proposed method over the state of the art.
AB - We consider a class of distributed optimization problems where the objective function consists of a sum of strongly convex and smooth functions and a (possibly nonsmooth) convex regularizer. A multi-agent network is assumed, where each agent holds a private cost function and cooperates with its neighbors to compute the optimum of the aggregate objective. We propose a quasi-Newton Alternating Direction Method of Multipliers (ADMM) where the primal update is solved inexactly with approximated curvature information. By introducing an intermediate consensus variable, we achieve a block-diagonal Hessian, which eliminates the need for inner communication loops within the network when computing the update direction. We establish global linear convergence to the optimal primal-dual solution without the need for backtracking line search, under the assumption that the component cost functions are strongly convex with Lipschitz continuous gradients. Numerical simulations on real datasets demonstrate the advantages of the proposed method over the state of the art.
UR - http://www.scopus.com/inward/record.url?scp=85126003379&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126003379&partnerID=8YFLogxK
U2 - 10.1109/CDC45484.2021.9683678
DO - 10.1109/CDC45484.2021.9683678
M3 - Conference contribution
AN - SCOPUS:85126003379
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 1689
EP - 1694
BT - 60th IEEE Conference on Decision and Control, CDC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 60th IEEE Conference on Decision and Control, CDC 2021
Y2 - 13 December 2021 through 17 December 2021
ER -