Adaptive sequential learning

Craig Wilson, Venugopal Veeravalli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A framework is considered for learning a sequence of slowly changing tasks, in which the parameters of the learning algorithm are obtained by minimizing a loss function to a desired accuracy using optimization algorithms such as stochastic gradient descent (SGD). The tasks change slowly in the sense that the optimal values of the learning algorithm parameters change at a bounded rate. An adaptive sequential learning algorithm is developed to solve such a slowly varying sequence of tasks. The algorithm is further extended to handle cross-validation and a cost-based approach to selecting the number of samples used to compute approximate solutions. Experiments with synthetic and real data validate the theoretical results.
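The core idea described in the abstract — warm-starting each task's optimization at the previous task's approximate solution, so that slowly drifting optima need only a few SGD iterations per task — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the quadratic losses, drift rate, step size, and gradient-norm stopping rule are all assumptions chosen for clarity.

```python
import numpy as np

def gd_to_accuracy(grad, theta0, epsilon, lr=0.05, max_iters=10_000):
    """Run gradient descent from theta0 until the gradient norm falls
    below epsilon -- a simple proxy for 'solve to a desired accuracy'."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iters):
        g = grad(theta)
        if np.linalg.norm(g) < epsilon:
            break
        theta = theta - lr * g
    return theta

def sequential_learning(grads, theta_init, epsilon):
    """Warm-start each task's optimization at the previous task's solution.
    When the optima drift slowly, each task needs only a few iterations."""
    theta = np.asarray(theta_init, dtype=float)
    solutions = []
    for grad in grads:
        theta = gd_to_accuracy(grad, theta, epsilon)
        solutions.append(theta.copy())
    return solutions

# Hypothetical slowly varying tasks: quadratic losses whose optimum
# moves by 0.01 per task (a bounded rate of change).
optima = [np.array([1.0 + 0.01 * t, -1.0]) for t in range(50)]
grads = [lambda th, m=m: th - m for m in optima]  # gradient of 0.5*||th - m||^2
sols = sequential_learning(grads, np.zeros(2), epsilon=1e-3)
```

Because each task is solved only to accuracy `epsilon` and its solution seeds the next task, the per-task work stays roughly constant as the sequence grows, which is the practical appeal of the adaptive sequential approach.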

Original language: English (US)
Title of host publication: Conference Record of the 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
Editors: Michael B. Matthews
Publisher: IEEE Computer Society
Pages: 326-330
Number of pages: 5
ISBN (Electronic): 9781538639542
DOIs
State: Published - Mar 1 2017
Event: 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016 - Pacific Grove, United States
Duration: Nov 6 2016 - Nov 9 2016

Publication series

Name: Conference Record - Asilomar Conference on Signals, Systems and Computers
ISSN (Print): 1058-6393

Other

Other: 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
Country/Territory: United States
City: Pacific Grove
Period: 11/6/16 - 11/9/16

Keywords

  • adaptive algorithms
  • gradient methods
  • machine learning
  • stochastic optimization

ASJC Scopus subject areas

  • Signal Processing
  • Computer Networks and Communications
