Distributed adaptive Newton methods with global superlinear convergence

Jiaqi Zhang, Keyou You, Tamer Başar

Research output: Contribution to journal › Article › peer-review

Abstract

This paper considers the distributed optimization problem where each node of a peer-to-peer network minimizes a finite sum of objective functions by communicating with its neighboring nodes. In sharp contrast to the existing literature, where the fastest distributed algorithms converge either at a global linear rate or a local superlinear rate, we propose a distributed adaptive Newton (DAN) algorithm with a global quadratic convergence rate. Our key idea lies in the design of a finite-time set-consensus method with Polyak's adaptive stepsize. Moreover, we introduce a low-rank matrix approximation (LA) technique to compress the innovation of the Hessian matrix so that each node only needs to transmit a message of dimension O(p) (where p is the dimension of the decision vectors) per iteration, which is essentially the same as that of first-order methods. Nevertheless, the resulting DAN-LA converges to an optimal solution with a global superlinear rate. Numerical experiments on logistic regression problems are conducted to validate the advantages of the proposed algorithms over existing methods.
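To make the communication-compression idea concrete, the following is a minimal Python sketch of how a p x p Hessian innovation can be compressed to an O(p) message via a rank-1 (dominant eigenpair) approximation. It illustrates the general low-rank approximation idea only; the exact compression rule used by DAN-LA in the paper may differ, and all function names and the toy data below are illustrative assumptions, not the authors' code.

import numpy as np

def compress_hessian_innovation(H_new, H_old):
    # Instead of sending the full p x p innovation H_new - H_old (O(p^2) numbers),
    # send only its dominant eigenpair: one scalar plus a p-vector, i.e. O(p) numbers.
    # This is a sketch of the general idea, not the exact rule used by DAN-LA.
    delta = H_new - H_old                      # symmetric p x p innovation
    eigvals, eigvecs = np.linalg.eigh(delta)   # eigendecomposition of the innovation
    i = np.argmax(np.abs(eigvals))             # eigenvalue of largest magnitude
    return eigvals[i], eigvecs[:, i]           # O(p) message: (scalar, p-vector)

def reconstruct(H_old, lam, v):
    # Receiver side: apply the rank-1 update to the local Hessian estimate.
    return H_old + lam * np.outer(v, v)

# Toy usage with a random symmetric Hessian pair (hypothetical data).
p = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((p, p)); H_old = A @ A.T
B = rng.standard_normal((p, p)); H_new = H_old + 0.1 * (B + B.T)
lam, v = compress_hessian_innovation(H_new, H_old)
H_approx = reconstruct(H_old, lam, v)
print(np.linalg.norm(H_new - H_approx))        # error of the rank-1 update

The rank-1 update via the dominant eigenpair is the best rank-1 approximation of the symmetric innovation in the Frobenius norm, which is why truncating to a single eigenpair is a natural way to cap the per-iteration message size at O(p).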

Original language: English (US)
Article number: 110156
Journal: Automatica
Volume: 138
State: Published - Apr 2022

Keywords

  • Distributed optimization
  • Low-rank approximation
  • Newton method
  • Superlinear convergence

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
