Continuous-time stochastic Mirror Descent on a network: Variance reduction, consensus, convergence

Maxim Raginsky, Jake Bouvrie

Research output: Contribution to journal › Conference article › peer-review


The method of Mirror Descent (MD), originally proposed by Nemirovski and Yudin in the late 1970s, has recently seen a major resurgence in the fields of large-scale optimization and machine learning. In a nutshell, MD is a primal-dual method that can be adapted to the geometry of the optimization problem at hand through the choice of a suitable strongly convex potential function. We study a stochastic, continuous-time variant of MD performed by a network of coupled noisy agents (processors). The overall dynamics is described by a system of stochastic differential equations, coupled linearly through the network Laplacian. We address the impact of the network topology (encoded in the spectrum of the Laplacian) on the speed of convergence of the 'mean-field' component to the optimum. We show that this convergence is particularly rapid whenever the potential function can be chosen in such a way that the resulting mean-field dynamics in the dual space follows an Ornstein-Uhlenbeck process.
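As a rough illustration of the setup described above, the following is a minimal Euler–Maruyama simulation sketch (not the paper's method; all parameter names, the ring topology, and the quadratic objective are illustrative assumptions). Each of n agents holds a dual state, the states are coupled linearly through the graph Laplacian, and the quadratic potential ψ(z) = ½‖z‖² makes the mirror map the identity, so the mean-field dual dynamics reduce to an Ornstein-Uhlenbeck process and averaging over agents reduces variance:

```python
import numpy as np

# Hypothetical toy instance: n agents minimize f(x) = 0.5*||x - x_star||^2
# from noisy gradients, with dual states coupled through a ring-graph
# Laplacian. With psi(z) = 0.5*||z||^2 the mirror map grad(psi*) is the
# identity, so primal and dual states coincide.

rng = np.random.default_rng(0)

n, d = 8, 2                      # number of agents, problem dimension
x_star = np.array([1.0, -2.0])   # minimizer of the common objective f
sigma, eta = 0.5, 1.0            # noise intensity, coupling strength
dt, steps = 1e-2, 5000           # Euler-Maruyama step size and horizon

# Ring-graph Laplacian: degree 2 on the diagonal, -1 to each neighbor
L = 2.0 * np.eye(n)
for i in range(n):
    L[i, (i - 1) % n] -= 1.0
    L[i, (i + 1) % n] -= 1.0

z = rng.normal(size=(n, d))      # dual states, one row per agent

for _ in range(steps):
    x = z                        # mirror map = identity for quadratic psi
    grad = x - x_star            # gradient of f at each agent's iterate
    noise = sigma * np.sqrt(dt) * rng.normal(size=(n, d))
    # drift: local gradient descent plus Laplacian consensus coupling
    z = z - dt * (grad + eta * (L @ x)) + noise

x_bar = z.mean(axis=0)           # 'mean-field' (network-average) estimate
print(np.round(x_bar, 2))
```

Because the averaged dual state follows an OU process centered at the optimum, its stationary fluctuation shrinks with the number of agents, which is the variance-reduction effect of consensus coupling in this toy setting.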

Original language: English (US)
Article number: 6426639
Pages (from-to): 6793-6800
Number of pages: 8
Journal: Proceedings of the IEEE Conference on Decision and Control
State: Published - 2012
Event: 51st IEEE Conference on Decision and Control, CDC 2012 - Maui, HI, United States
Duration: Dec 10, 2012 - Dec 13, 2012

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

