Abstract
The method of Mirror Descent (MD), originally proposed by Nemirovski and Yudin in the late 1970s, has recently seen a major resurgence in the fields of large-scale optimization and machine learning. In a nutshell, MD is a primal-dual method that can be adapted to the geometry of the optimization problem at hand through the choice of a suitable strongly convex potential function. We study a stochastic, continuous-time variant of MD performed by a network of coupled noisy agents (processors). The overall dynamics is described by a system of stochastic differential equations, coupled linearly through the network Laplacian. We address the impact of the network topology (encoded in the spectrum of the Laplacian) on the speed of convergence of the 'mean-field' component to the optimum. We show that this convergence is particularly rapid whenever the potential function can be chosen in such a way that the resulting mean-field dynamics in the dual space follows an Ornstein-Uhlenbeck process.
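The setup in the abstract can be illustrated with a small numerical sketch: an Euler-Maruyama simulation of noisy agents running mirror descent, coupled linearly through a graph Laplacian. The graph (a ring), the objective, the quadratic potential, and all parameter values below are illustrative assumptions, not the paper's exact model; with a quadratic potential the mirror map is the identity, so the dual dynamics reduce to an Ornstein-Uhlenbeck process, the favorable case the abstract highlights.

```python
import numpy as np

# Hedged sketch (assumptions: ring graph, scalar decision variables,
# quadratic objective and potential). Not the paper's exact dynamics.
rng = np.random.default_rng(0)

# Ring graph on n agents -> graph Laplacian L = D - A
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

def grad_f(x):
    # Gradient of the illustrative objective f(x) = 0.5 * (x - 1)^2
    return x - 1.0

def mirror(z):
    # Quadratic potential psi(x) = x^2 / 2 -> mirror map is the identity,
    # so the dual (z) dynamics form an Ornstein-Uhlenbeck process.
    return z

dt, sigma, steps = 1e-3, 0.1, 5000
z = rng.normal(size=n)  # dual states of the n agents
for _ in range(steps):
    x = mirror(z)
    dw = rng.normal(scale=np.sqrt(dt), size=n)
    # Gradient drift + linear Laplacian coupling + diffusion noise
    z += (-grad_f(x) - L @ x) * dt + sigma * dw

print(x.mean())  # the mean-field component settles near the optimum x* = 1
```

Because the rows of an undirected Laplacian sum to zero, the coupling term cancels in the network average, and the mean-field component follows a one-dimensional OU process contracting toward the minimizer at unit rate, with fluctuations shrinking as the number of agents grows.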
| Original language | English (US) |
| --- | --- |
| Article number | 6426639 |
| Pages (from-to) | 6793-6800 |
| Number of pages | 8 |
| Journal | Proceedings of the IEEE Conference on Decision and Control |
| DOIs | |
| State | Published - 2012 |
| Event | 51st IEEE Conference on Decision and Control, CDC 2012 - Maui, HI, United States. Duration: Dec 10 2012 → Dec 13 2012 |
ASJC Scopus subject areas
- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization