TY - GEN
T1 - Active and Adaptive Sequential Learning with per Time-step Excess Risk Guarantees
AU - Bu, Yuheng
AU - Lu, Jiaxun
AU - Veeravalli, Venugopal V.
PY - 2019/11
Y1 - 2019/11
N2 - We consider solving a sequence of machine learning problems that vary in a bounded manner from one time-step to the next. To solve these problems in an accurate and data-efficient way, we propose an active and adaptive learning framework, in which we actively query the labels of the most informative samples from an unlabeled data pool, and adapt to the change by utilizing the information acquired in the previous steps. Our goal is to satisfy a pre-specified bound on the excess risk at each time-step. We first design the active querying algorithm by minimizing the excess risk using stochastic gradient descent in the maximum likelihood estimation setting. Then, we propose a sample size selection rule that minimizes the number of samples by adapting to the change in the learning problems, while satisfying the required bound on excess risk at each time-step. Based on the actively queried samples, we construct an estimator for the change in the learning problems, which we prove to be an asymptotically tight upper bound of its true value. We validate our algorithm and theory through experiments with real data.
UR - http://www.scopus.com/inward/record.url?scp=85083312118&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083312118&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF44664.2019.9048932
DO - 10.1109/IEEECONF44664.2019.9048932
M3 - Conference contribution
AN - SCOPUS:85083312118
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 1606
EP - 1610
BT - Conference Record - 53rd Asilomar Conference on Signals, Systems and Computers, ACSSC 2019
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 53rd Asilomar Conference on Signals, Systems and Computers, ACSSC 2019
Y2 - 3 November 2019 through 6 November 2019
ER -