Stochastic subgradient algorithms for strongly convex optimization over distributed networks

Muhammed O. Sayin, N. Denizcan Vanli, Suleyman S. Kozat, Tamer Başar

Research output: Contribution to journal › Article › peer-review

Abstract

We study diffusion- and consensus-based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is available only at a different node, and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on stochastic subgradient descent (SSD) updates. We use a carefully designed time-dependent weighted averaging of the SSD iterates, which yields a convergence rate of O(N√N / ((1−σ)T)) after T gradient updates for each node on a network of N nodes, where 0 ≤ σ < 1 denotes the second largest singular value of the communication matrix. This rate of convergence matches the performance lower bound up to constant terms. As with the SSD algorithm, the computational complexity of the proposed algorithm scales linearly with the dimensionality of the data. Furthermore, the communication load of the proposed method is the same as that of the SSD algorithm. Thus, the proposed algorithm is highly efficient in terms of both complexity and communication load. We illustrate the merits of the algorithm with respect to state-of-the-art methods over benchmark real-life data sets.
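The abstract describes the method only at a high level. The sketch below is a minimal, hypothetical instantiation of such a scheme, assuming a doubly stochastic communication matrix W, a diffusion-style adapt-then-combine update, step sizes 1/(μt) for a μ-strongly convex objective, and averaging weights proportional to t; these are standard choices for strongly convex SSD with iterate averaging, not the paper's exact specification, and the function names and signature are illustrative.

```python
import numpy as np

def distributed_ssd(grad_oracle, W, T, mu, dim):
    """Sketch: diffusion-based stochastic subgradient descent with
    time-dependent weighted iterate averaging (not the authors' code).

    grad_oracle(i, w): stochastic subgradient of node i's objective at w.
    W: (N, N) doubly stochastic communication matrix.
    T: number of gradient oracle calls allowed per node.
    mu: strong-convexity parameter (assumed known).
    """
    N = W.shape[0]
    w = np.zeros((N, dim))        # current iterate at each node
    w_avg = np.zeros((N, dim))    # time-weighted running average per node
    for t in range(1, T + 1):
        eta = 1.0 / (mu * t)      # standard step size under strong convexity
        # adapt: local SSD step at every node using its own oracle
        psi = np.stack([w[i] - eta * grad_oracle(i, w[i]) for i in range(N)])
        # combine: each node mixes its neighbors' intermediate iterates
        w = W @ psi
        # weights 2t / (T(T+1)) sum to 1 over t = 1..T and emphasize
        # late iterates
        w_avg += (2.0 * t / (T * (T + 1))) * w
    return w_avg
```

The design point carried by the weighting: with uniform averaging, strongly convex SSD typically incurs an extra log T factor; weights growing linearly in t remove it, consistent with the O(1/T) dependence in the rate quoted above, up to the network-dependent factor N√N/(1−σ).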

Original language: English (US)
Pages (from-to): 248-260
Number of pages: 13
Journal: IEEE Transactions on Network Science and Engineering
Volume: 4
Issue number: 4
DOIs
State: Published - Oct 1 2017

Keywords

  • Consensus strategies
  • Convex optimization
  • Diffusion strategies
  • Distributed processing
  • Online learning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Computer Networks and Communications
