A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization

Minghan Yang, Andre Milzarek, Zaiwen Wen, Tong Zhang

Research output: Contribution to journal › Article › peer-review


In this paper, a novel stochastic extra-step quasi-Newton method is developed to solve a class of nonsmooth nonconvex composite optimization problems. We assume that the gradient of the smooth part of the objective function can only be approximated by stochastic oracles. The proposed method combines general stochastic higher-order steps derived from an underlying proximal-type fixed-point equation with additional stochastic proximal gradient steps to guarantee convergence. Based on suitable bounds on the step sizes, we establish global convergence to stationary points in expectation, and we discuss an extension of the approach using variance reduction techniques. Motivated by large-scale and big data applications, we investigate a stochastic coordinate-type quasi-Newton scheme that makes it possible to generate cheap and tractable stochastic higher-order directions. Finally, numerical results on large-scale logistic regression and deep learning problems show that our proposed algorithm compares favorably with other state-of-the-art methods.
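To make the extra-step idea concrete, the following is a minimal, hedged sketch (not the authors' algorithm) for a composite problem min f(x) + λ‖x‖₁ with a stochastic gradient oracle for f. The "higher-order" trial step here is a simple scaled step along the natural residual x − prox(x − α∇f(x)), standing in as a placeholder for a true quasi-Newton direction; the function names, step sizes, and the scaling factor are illustrative assumptions. The key structural point from the abstract is preserved: each trial step is followed by an extra stochastic proximal gradient step that safeguards convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stoch_grad(w, A, b, batch):
    # mini-batch (stochastic oracle) gradient of the logistic loss
    Ab, bb = A[batch], b[batch]
    return Ab.T @ (1.0 / (1.0 + np.exp(-(Ab @ w))) - bb) / len(batch)

def extra_step_prox_method(A, b, lam=0.01, lr=0.5, iters=200, batch_size=16):
    """Sketch of an extra-step scheme: trial step along the proximal
    residual, followed by a safeguarding stochastic proximal gradient step."""
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(iters):
        batch = rng.choice(n, size=batch_size, replace=False)
        g = stoch_grad(w, A, b, batch)
        # proximal-gradient point and natural residual F(w) = w - prox(w - lr*g)
        u = soft_threshold(w - lr * g, lr * lam)
        residual = w - u
        # trial "higher-order" step: here just a scaled residual step
        # (placeholder for a quasi-Newton direction; scaling 1.5 is arbitrary)
        z = w - 1.5 * residual
        # extra stochastic proximal gradient step from z
        gz = stoch_grad(z, A, b, batch)
        w = soft_threshold(z - lr * gz, lr * lam)
    return w
```

In the paper, the trial direction would come from a (coordinate-type) quasi-Newton model of the fixed-point equation and the step sizes would satisfy the bounds required for the convergence guarantees; this sketch only shows the two-step iteration pattern.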

Original language: English (US)
Pages (from-to): 257-303
Number of pages: 47
Journal: Mathematical Programming
Issue number: 1-2
State: Published - Jul 2022
Externally published: Yes


Keywords

  • Global convergence
  • Nonsmooth stochastic optimization
  • Stochastic approximation
  • Stochastic higher order method
  • Stochastic quasi-Newton scheme

ASJC Scopus subject areas

  • Software
  • General Mathematics
