Convergence of the Iterates in Mirror Descent Methods

Thinh T. Doan, Subhonmesh Bose, D. Hoa Nguyen, Carolyn L. Beck

Research output: Contribution to journal › Article › peer-review

Abstract

We consider centralized and distributed mirror descent (MD) algorithms over a finite-dimensional Hilbert space, and prove that the problem variables converge to an optimizer of a possibly nonsmooth function when the step sizes are square summable but not summable. Prior literature has focused on the convergence of the function value to its optimum. However, applications from distributed optimization and learning in games require the convergence of the variables to an optimizer, which is generally not guaranteed without assuming strong convexity of the objective function. We provide numerical simulations comparing entropic MD and standard subgradient methods for the robust regression problem.
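To make the setting concrete, below is a minimal, illustrative sketch of entropic mirror descent on the probability simplex for an ℓ1 (robust) regression objective, with step sizes that are square summable but not summable, as the abstract requires. This is not the authors' experimental code: the problem data, dimensions, step-size constant, and function names are all hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical robust regression instance:
# minimize f(x) = ||A x - b||_1 over the probability simplex.
m, n = 50, 10
A = rng.standard_normal((m, n))
x_true = rng.dirichlet(np.ones(n))             # planted point in the simplex
b = A @ x_true + 0.1 * rng.standard_normal(m)  # noisy measurements

def subgradient(x):
    # A subgradient of x -> ||A x - b||_1 is A^T sign(A x - b).
    return A.T @ np.sign(A @ x - b)

x = np.full(n, 1.0 / n)                        # uniform start in the simplex
for k in range(1, 5001):
    alpha = 1.0 / k                            # square summable, not summable
    g = subgradient(x)
    # Entropic (KL) mirror step: multiplicative update, then renormalize.
    # Shifting g by g.min() only rescales all entries by a common factor,
    # which the normalization cancels; it guards against overflow in exp.
    x = x * np.exp(-alpha * (g - g.min()))
    x /= x.sum()

print("final objective:", np.abs(A @ x - b).sum())
```

Under these (assumed) conditions the iterates x_k themselves, not just the objective values, approach a minimizer, which is the paper's main point; a comparison against a plain projected subgradient method would replace the multiplicative step with a Euclidean projection onto the simplex.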

Original language: English (US)
Pages (from-to): 114-119
Number of pages: 6
Journal: IEEE Control Systems Letters
Volume: 3
Issue number: 1
DOIs
State: Published - Jan 2019

Keywords

  • Distributed optimization
  • mirror descent

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Control and Optimization
