## Abstract

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An $L^p$ convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.
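To make the abstract concrete, the following is a minimal sketch of the idea behind SGDCT: the parameter follows a (noisy) descent direction driven by the incoming data, and the update is itself a stochastic differential equation, here discretized with an Euler-Maruyama scheme. The specific model (an Ornstein-Uhlenbeck process with unknown mean-reversion rate `theta_true`), the least-squares-style update, and the learning-rate schedule are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data-generating model (assumption): dX_t = -theta* X_t dt + sigma dW_t
theta_true = 1.0   # unknown mean-reversion rate to be learned
sigma = 0.5
dt = 1e-3          # Euler-Maruyama step for the continuous-time dynamics
T = 200.0
n_steps = int(T / dt)

X = 0.0
theta = 0.0        # initial parameter guess
for k in range(n_steps):
    t = k * dt
    dW = rng.normal(0.0, np.sqrt(dt))
    # Observed increment of the continuous data stream
    dX = -theta_true * X * dt + sigma * dW
    # SGDCT-style update: with drift model f(x; theta) = -theta * x,
    # descend along grad_theta f(X; theta) * (dX - f(X; theta) dt),
    # a noisy gradient of the instantaneous squared fitting error
    alpha = 1.0 / (1.0 + 0.1 * t)   # decaying learning rate (assumption)
    theta += alpha * (-X) * (dX + theta * X * dt)
    X += dX

print(f"estimated theta: {theta:.3f} (true: {theta_true})")
```

With a decaying learning rate, the estimate drifts toward the true parameter and its fluctuations shrink, which is the regime the paper's central limit theorem quantifies.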

| Original language | English (US) |
|---|---|
| Pages (from-to) | 124-151 |
| Number of pages | 28 |
| Journal | Stochastic Systems |
| Volume | 10 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jun 2020 |

## Keywords

- Central limit theorem
- Machine learning
- Statistical learning
- Stochastic differential equations
- Stochastic gradient descent

## ASJC Scopus subject areas

- Statistics and Probability
- Statistics, Probability and Uncertainty
- Modeling and Simulation
- Management Science and Operations Research