Stochastic gradient descent in continuous time: A central limit theorem

Justin Sirignano, Konstantinos Spiliopoulos

Research output: Contribution to journal › Article › peer-review

Abstract

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An L^p convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.
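As a concrete illustration of the algorithm described in the abstract, the sketch below applies an SGDCT-style update to a toy problem: estimating the drift parameter of an Ornstein-Uhlenbeck process from a simulated data stream. This is not the paper's code; the model, learning-rate schedule, and Euler-Maruyama discretization are illustrative choices.

```python
import numpy as np

# Hypothetical example (not from the paper): the data stream is an
# Ornstein-Uhlenbeck process dX_t = -theta_true * X_t dt + sigma dW_t,
# and SGDCT learns theta by following a noisy descent direction in
# continuous time, here discretized with an Euler-Maruyama scheme.
rng = np.random.default_rng(0)

theta_true, sigma = 2.0, 0.5   # true drift parameter and noise level
dt, n_steps = 1e-3, 200_000    # time step and horizon (T = 200)

x, theta = 1.0, 0.0            # observed state and running estimate
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    dx = -theta_true * x * dt + sigma * dW   # incoming data increment
    alpha = 1.0 / (1.0 + 0.01 * k * dt)     # decaying learning rate
    # Noisy descent direction: gradient of the model drift
    # f(x; theta) = -theta * x with respect to theta (i.e. -x),
    # weighted by the residual of the observed increment.
    theta += alpha * (-x) * (dx - (-theta * x) * dt)
    x += dx

# theta should now be close to theta_true; the paper's central limit
# theorem characterizes the asymptotic fluctuations of this estimate.
```

The update rule is the Euler discretization of a stochastic differential equation for the parameter, matching the abstract's description that "the parameter updates occur in continuous time and satisfy a stochastic differential equation."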

Original language: English (US)
Pages (from-to): 124-151
Number of pages: 28
Journal: Stochastic Systems
Volume: 10
Issue number: 2
DOIs
State: Published - Jun 2020

Keywords

  • Central limit theorem
  • Machine learning
  • Statistical learning
  • Stochastic differential equations
  • Stochastic gradient descent

ASJC Scopus subject areas

  • Management Science and Operations Research
  • Statistics, Probability and Uncertainty
  • Modeling and Simulation
  • Statistics and Probability
