Utilizing second order information in minibatch stochastic variance reduced proximal iterations

Jialei Wang, Tong Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

We present a novel minibatch stochastic optimization method for empirical risk minimization of linear predictors. The method efficiently leverages both sub-sampled first-order and higher-order information by incorporating variance-reduction and acceleration techniques. We prove improved iteration complexity over state-of-the-art methods under suitable conditions. In particular, the approach enjoys global fast convergence for quadratic convex objectives and local fast convergence for general convex objectives. Experiments demonstrate the empirical advantage of the proposed method over existing approaches in the literature.
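The two ingredients the abstract names — variance-reduced stochastic gradients and sub-sampled second-order information — can be illustrated with a minimal sketch. The code below is a hypothetical example, not the authors' algorithm: it runs SVRG-style variance-reduced minibatch steps on a ridge-regularized least-squares objective (a quadratic convex objective, the case with global fast convergence in the paper), preconditioning each step with a Hessian estimated from a random sub-sample. The function name, batch sizes, and step scheme are all illustrative assumptions.

```python
import numpy as np

def preconditioned_svrg(A, b, lam=0.1, epochs=20, batch=32, hess_sample=100, seed=0):
    """Illustrative sketch only: SVRG-type variance-reduced steps,
    preconditioned by a sub-sampled Hessian, for the ridge objective
    F(w) = ||A w - b||^2 / (2n) + (lam/2) ||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Snapshot point and its full gradient (the variance-reduction anchor).
        w_snap = w.copy()
        full_grad = A.T @ (A @ w_snap - b) / n + lam * w_snap
        # Second-order information: a Hessian built from a random sub-sample.
        idx_h = rng.choice(n, size=min(hess_sample, n), replace=False)
        H = A[idx_h].T @ A[idx_h] / len(idx_h) + lam * np.eye(d)
        for _ in range(n // batch):
            idx = rng.choice(n, size=batch, replace=False)
            Ab = A[idx]
            g = Ab.T @ (Ab @ w - b[idx]) / batch + lam * w
            g_snap = Ab.T @ (Ab @ w_snap - b[idx]) / batch + lam * w_snap
            v = g - g_snap + full_grad        # variance-reduced gradient
            w -= np.linalg.solve(H, v)        # preconditioned (Newton-type) step
    return w

# Usage on a small synthetic problem, compared with the closed-form minimizer.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
b = rng.standard_normal(200)
w = preconditioned_svrg(A, b, lam=0.1)
w_star = np.linalg.solve(A.T @ A / 200 + 0.1 * np.eye(5), A.T @ b / 200)
```

On this quadratic the preconditioned, variance-reduced iterates approach the closed-form solution `w_star`; the sketch omits the acceleration component and the proximal handling of non-smooth regularizers discussed in the paper.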

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 20
State: Published - Feb 1 2019
Externally published: Yes

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence
