Stochastic Gradient Descent in Continuous Time

Justin Sirignano, Konstantinos Spiliopoulos

Research output: Contribution to journal › Article

Abstract

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. SGDCT performs an online parameter update in continuous time, with the parameter updates θ_t satisfying a stochastic differential equation. We prove that lim_{t→∞} ‖∇ḡ(θ_t)‖ = 0, where ḡ is a natural objective function for the estimation of the continuous-time dynamics. The convergence proof leverages ergodicity by using an appropriate Poisson equation to help describe the evolution of the parameters for large times. For certain continuous-time problems, SGDCT has some promising advantages compared to a traditional stochastic gradient descent algorithm. This paper mainly focuses on applications in finance, such as model estimation for stocks, bonds, interest rates, and financial derivatives. SGDCT can also be used for the optimization of high-dimensional continuous-time models, such as American options. As an example application, SGDCT is combined with a deep neural network to price high-dimensional American options (up to 100 dimensions).
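
The online update described in the abstract can be sketched on a one-dimensional toy problem: estimating the mean-reversion rate of an Ornstein-Uhlenbeck process from a simulated data stream. The process parameters, learning-rate schedule, and Euler discretization below are illustrative assumptions for this sketch, not values or code taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's code): SGDCT-style estimation of
# the drift parameter a of an Ornstein-Uhlenbeck process
#     dX_t = -a * X_t dt + sigma dW_t.
# The continuous-time parameter update
#     dtheta_t = alpha_t * grad_theta f(theta_t, X_t) * sigma^{-2}
#                * (dX_t - f(theta_t, X_t) dt),
# with model drift f(theta, x) = -theta * x, is discretized with an
# Euler scheme over a simulated stream of increments dX_t.
rng = np.random.default_rng(0)
a_true, sigma, dt, n_steps = 1.5, 0.5, 1e-3, 200_000

x, theta = 1.0, 0.0                      # state and parameter estimate
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    dx = -a_true * x * dt + sigma * dW   # observed increment of the data stream
    alpha = 5.0 / (5.0 + k * dt)         # decaying learning rate alpha_t (assumed schedule)
    # grad_theta f(theta, x) = -x; the residual dx - f(theta, x) dt supplies
    # the (noisy) descent direction along the continuous stream of data
    theta += alpha * (-x) / sigma**2 * (dx - (-theta * x) * dt)
    x += dx

print(f"estimate: {theta:.2f} (true a = {a_true})")
```

As the learning rate decays, the estimate theta drifts toward the true mean-reversion rate, mirroring the convergence statement ‖∇ḡ(θ_t)‖ → 0 in the abstract for this simple quadratic-in-the-limit objective.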

Original language: English (US)
Pages (from-to): 933-961
Number of pages: 29
Journal: SIAM Journal on Financial Mathematics
Volume: 8
Issue number: 1
DOIs
State: Published - Jan 1 2017

Keywords

  • American options
  • Deep learning
  • Machine learning
  • Statistical learning
  • Stochastic differential equations
  • Stochastic gradient descent

ASJC Scopus subject areas

  • Numerical Analysis
  • Finance
  • Applied Mathematics

