Convergence of the Iterates in Mirror Descent Methods

Thinh T. Doan, Subhonmesh Bose, D. Hoa Nguyen, Carolyn L Beck

Research output: Contribution to journal › Article

Abstract

We consider centralized and distributed mirror descent (MD) algorithms over a finite-dimensional Hilbert space, and prove that the problem variables converge to an optimizer of a possibly nonsmooth function when the step sizes are square summable but not summable. Prior literature has focused on the convergence of the function value to its optimum. However, applications from distributed optimization and learning in games require the convergence of the variables to an optimizer, which is generally not guaranteed without assuming strong convexity of the objective function. We provide numerical simulations comparing entropic MD and standard subgradient methods for the robust regression problem.
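To make the setting concrete: a mirror descent step with mirror map phi and Bregman divergence D_phi takes the form x_{k+1} = argmin_{x in X} { alpha_k <g_k, x> + D_phi(x, x_k) }, where g_k is a subgradient of the objective at x_k, and a step-size sequence such as alpha_k = 1/(k+1) is square summable but not summable (sum alpha_k = infinity, sum alpha_k^2 < infinity). The sketch below is illustrative rather than a reproduction of the paper's experiments; the data A and b, the dimensions, the iteration count, and the step sizes are all assumptions. It runs entropic MD, i.e., the exponentiated-gradient update over the probability simplex, on an l1 (robust) regression objective.

import numpy as np

def entropic_md_l1(A, b, iters=5000):
    # Illustrative sketch (not the paper's exact setup): entropic mirror descent
    # for min_x ||A x - b||_1 over the probability simplex, with step sizes
    # alpha_k = 1/(k+1), which are square summable but not summable.
    n = A.shape[1]
    x = np.full(n, 1.0 / n)            # start at the center of the simplex
    for k in range(iters):
        g = A.T @ np.sign(A @ x - b)   # a subgradient of ||A x - b||_1 at x
        alpha = 1.0 / (k + 1)          # square summable, not summable
        x = x * np.exp(-alpha * g)     # entropic (negative-entropy) mirror step
        x = x / x.sum()                # Bregman projection onto the simplex = renormalization
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)                   # assumed synthetic data
    A = rng.standard_normal((50, 5))
    x_true = np.array([0.6, 0.1, 0.1, 0.1, 0.1])     # a point in the simplex
    b = A @ x_true + 0.05 * rng.laplace(size=50)     # heavy-tailed noise
    print(np.round(entropic_md_l1(A, b), 3))         # iterate approaches a minimizer

With the entropic mirror map (negative entropy), the Bregman projection onto the simplex reduces to the renormalization in the last line of the loop, which is why entropic MD is a natural comparison to the standard projected subgradient method on simplex-constrained problems such as the robust regression example above.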

Original language: English (US)
Pages (from-to): 114-119
Number of pages: 6
Journal: IEEE Control Systems Letters
Volume: 3
Issue number: 1
DOI: 10.1109/LCSYS.2018.2854889
State: Published - Jan 2019

Keywords

  • Distributed optimization
  • mirror descent

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Control and Optimization

Cite this

Convergence of the Iterates in Mirror Descent Methods. / Doan, Thinh T.; Bose, Subhonmesh; Nguyen, D. Hoa; Beck, Carolyn L.

In: IEEE Control Systems Letters, Vol. 3, No. 1, 01.2019, p. 114-119.

Research output: Contribution to journal › Article

@article{63cbd2d352dc4ba3a064606f9f722cec,
title = "Convergence of the Iterates in Mirror Descent Methods",
abstract = "We consider centralized and distributed mirror descent (MD) algorithms over a finite-dimensional Hilbert space, and prove that the problem variables converge to an optimizer of a possibly nonsmooth function when the step sizes are square summable but not summable. Prior literature has focused on the convergence of the function value to its optimum. However, applications from distributed optimization and learning in games require the convergence of the variables to an optimizer, which is generally not guaranteed without assuming strong convexity of the objective function. We provide numerical simulations comparing entropic MD and standard subgradient methods for the robust regression problem.",
keywords = "Distributed optimization, mirror descent",
author = "Doan, {Thinh T.} and Subhonmesh Bose and Nguyen, {D. Hoa} and Beck, {Carolyn L}",
year = "2019",
month = jan,
doi = "10.1109/LCSYS.2018.2854889",
language = "English (US)",
volume = "3",
pages = "114--119",
journal = "IEEE Control Systems Letters",
issn = "2475-1456",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "1",
}
