Abstract
A framework previously introduced in Wilson et al. (2018) for solving a sequence of stochastic optimization problems with bounded changes in the minimizers is extended and applied to machine learning problems such as regression and classification. The stochastic optimization problems arising in these machine learning problems are solved using algorithms such as stochastic gradient descent (SGD). A method based on estimates of the change in the minimizers and properties of the optimization algorithm is introduced for adaptively selecting the number of samples at each time step to ensure that the excess risk—that is, the expected gap between the loss achieved by the approximate minimizer produced by the optimization algorithm and the exact minimizer—does not exceed a target level. A bound is developed to show that the estimate of the change in the minimizers is nontrivial provided that the excess risk is small enough. Extensions relevant to the machine learning setting are considered, including a cost-based approach to select the number of samples with a cost budget over a fixed horizon, and an approach to applying cross-validation for model selection. Finally, experiments with synthetic and real data are used to validate the algorithms.
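The adaptive sample-size idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: the drift proxy (distance between successive estimates), the assumed O(1/√n) excess-risk bound, and the rate constant are all simplifying assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sampler(w_true, noise=0.1):
    """Stream (x, y) pairs from a linear model whose risk minimizer is w_true."""
    def sample():
        x = rng.normal(size=w_true.shape)
        y = x @ w_true + noise * rng.normal()
        return x, y
    return sample

def sgd(w0, sample, n_steps, lr0=0.2):
    """Plain SGD on the squared loss with 1/sqrt(t) step sizes."""
    w = w0.copy()
    for t in range(1, n_steps + 1):
        x, y = sample()
        w -= (lr0 / np.sqrt(t)) * (x @ w - y) * x
    return w

def choose_num_samples(drift_est, target, rate_const=2.0):
    """Pick n so that an assumed O(1/sqrt(n)) excess-risk bound, inflated by
    the estimated change in the minimizer, stays below the target level."""
    return int(np.ceil((rate_const * (1.0 + drift_est) / target) ** 2))

# A sequence of stochastic optimization problems with bounded changes
# in the minimizers: w_star drifts slowly from one time step to the next.
w_star = np.ones(3)
w_hat = np.zeros(3)
prev_hat = w_hat.copy()
for step in range(5):
    w_star = w_star + 0.05 * rng.normal(size=3)      # bounded drift
    drift = float(np.linalg.norm(w_hat - prev_hat))  # crude drift estimate
    n = choose_num_samples(drift, target=0.1)        # adaptive sample count
    prev_hat = w_hat.copy()
    w_hat = sgd(w_hat, make_sampler(w_star), n)
```

At each time step the number of fresh samples grows when the estimated minimizer drift is large and shrinks when the problem is nearly stationary, which is the qualitative behavior the adaptive selection rule in the paper is designed to achieve.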
Original language: English (US)
Pages (from-to): 545-568
Number of pages: 24
Journal: Sequential Analysis
Volume: 38
Issue number: 4
DOIs:
State: Published - Oct 2, 2019
Keywords
 62L05
 62L10
 68T05
 Adaptive learning
 cross-validation
 excess risk
 sequential learning
 stochastic gradient descent
ASJC Scopus subject areas
 Statistics and Probability
 Modeling and Simulation
Prizes

Abraham Wald Prize in Sequential Analysis
Veeravalli, Venugopal Varadachari (Recipient), 2023