Stochastic Learning Rate Optimization in the Stochastic Approximation and Online Learning Settings

Theodoros Mamalis, Dusan Stipanovic, Petros G Voulgaris

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this work, multiplicative stochasticity is applied to the learning rate of stochastic optimization algorithms, giving rise to stochastic learning-rate schemes. In-expectation theoretical convergence results are provided for Stochastic Gradient Descent equipped with this novel stochastic learning-rate scheme in the stochastic approximation setting, along with convergence results in the online optimization setting. Empirical results consider the case of a stochastic learning rate whose multiplicative stochasticity is adaptively uniformly distributed.
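The core idea described in the abstract — multiplying the learning rate of SGD by a random factor at each step — can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the uniform multiplier range `[1 - delta, 1 + delta]`, the fixed base rate `eta`, and the quadratic test objective are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_stochastic_lr(grad, x0, eta=0.1, delta=0.5, steps=200):
    """SGD whose learning rate is scaled at each step by a uniform
    random multiplier in [1 - delta, 1 + delta] (illustrative sketch;
    the paper's adaptive distribution scheme may differ)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        # Multiplicative stochasticity: random per-step learning-rate factor
        m = rng.uniform(1.0 - delta, 1.0 + delta)
        x -= eta * m * grad(x)
    return x

# Noisy gradient of f(x) = 0.5 * ||x||^2, mimicking the stochastic setting
grad = lambda x: x + 0.01 * rng.normal(size=x.shape)
x_final = sgd_stochastic_lr(grad, x0=[5.0, -3.0])
```

Because the multiplier has mean 1, the scheme preserves the base learning rate in expectation while injecting extra randomness into the step sizes, which is the mechanism the convergence results above analyze.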

Original language: English (US)
Title of host publication: 2022 American Control Conference, ACC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4286-4291
Number of pages: 6
ISBN (Electronic): 9781665451963
DOIs
State: Published - 2022
Externally published: Yes
Event: 2022 American Control Conference, ACC 2022 - Atlanta, United States
Duration: Jun 8 2022 - Jun 10 2022

Publication series

Name: Proceedings of the American Control Conference
Volume: 2022-June
ISSN (Print): 0743-1619

Conference

Conference: 2022 American Control Conference, ACC 2022
Country/Territory: United States
City: Atlanta
Period: 6/8/22 - 6/10/22

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
