Abstract
We consider centralized and distributed mirror descent (MD) algorithms over a finite-dimensional Hilbert space, and prove that the problem variables converge to an optimizer of a possibly nonsmooth function when the step sizes are square-summable but not summable. Prior literature has focused on the convergence of the function value to its optimum. However, applications from distributed optimization and learning in games require the convergence of the variables to an optimizer, which is generally not guaranteed without assuming strong convexity of the objective function. We provide numerical simulations comparing entropic MD and standard subgradient methods on the robust regression problem.
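The paper's experiments are not reproduced here, but a minimal sketch can illustrate the entropic MD update with square-summable, non-summable step sizes on a synthetic robust (ℓ1) regression instance over the probability simplex. The problem data `A`, `b`, the simplex constraint, and the step-size constant `c` below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Synthetic robust regression instance (an assumption for illustration,
# not the paper's experimental data).
rng = np.random.default_rng(0)
m, n = 50, 10
A = rng.standard_normal((m, n))
x_true = rng.dirichlet(np.ones(n))              # ground truth on the simplex
b = A @ x_true + 0.1 * rng.standard_normal(m)   # noisy measurements

def subgrad(x):
    """A subgradient of the robust regression loss f(x) = ||Ax - b||_1."""
    return A.T @ np.sign(A @ x - b)

# Square-summable but not summable step sizes: alpha_k = c / (k + 1),
# with an illustrative constant c.
c = 0.1
x = np.full(n, 1.0 / n)  # start at the barycenter of the simplex
for k in range(5000):
    alpha = c / (k + 1)
    g = subgrad(x)
    # Entropic MD step on the simplex: with the negative-entropy mirror
    # map, the update has the closed multiplicative-weights form below.
    # Shifting the exponent by its largest value avoids overflow without
    # changing the normalized result.
    z = -alpha * g
    w = x * np.exp(z - z.max())
    x = w / w.sum()

print("final L1 loss:", np.abs(A @ x - b).sum())
```

The multiplicative form is the reason entropic MD is attractive on the simplex: the mirror step and the projection are computed jointly in closed form, whereas a Euclidean subgradient method would require an explicit projection onto the simplex at every iteration.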
Original language | English (US)
---|---
Pages (from-to) | 114-119
Number of pages | 6
Journal | IEEE Control Systems Letters
Volume | 3
Issue number | 1
DOIs |
State | Published - Jan 2019
Keywords
- Distributed optimization
- mirror descent
ASJC Scopus subject areas
- Control and Systems Engineering
- Control and Optimization