TY - GEN
T1 - Using Predictions in Online Optimization with Switching Costs
T2 - 2018 Annual American Control Conference, ACC 2018
AU - Li, Yingying
AU - Qu, Guannan
AU - Li, Na
N1 - Funding Information:
This work was supported by NSF CAREER 1553407 and ARPA-E NODES. Y. Li, G. Qu and N. Li are with the School of Engineering and Applied Sciences, Harvard University, 33 Oxford Street, Cambridge, MA 02138, USA (email: [email protected], [email protected], [email protected]).
Publisher Copyright:
© 2018 AACC.
PY - 2018/8/9
Y1 - 2018/8/9
N2 - This paper studies an online optimization problem with switching costs and a finite prediction window. We propose a computationally efficient algorithm, Receding Horizon Gradient Descent (RHGD), which requires only a finite number of gradient evaluations at each time step. We show that both the dynamic regret and the competitive ratio of the algorithm decay exponentially fast with the length of the prediction window. Moreover, we provide a fundamental lower bound on dynamic regret for general online algorithms with a finite prediction window, and show that the dynamic regret of any online algorithm, even with more computation, decays at most exponentially as the prediction window size increases. This demonstrates that limited prediction information, rather than limited computational power, is the key constraint on performance in online decision making. Finally, we present simulation results to evaluate our algorithm numerically.
AB - This paper studies an online optimization problem with switching costs and a finite prediction window. We propose a computationally efficient algorithm, Receding Horizon Gradient Descent (RHGD), which requires only a finite number of gradient evaluations at each time step. We show that both the dynamic regret and the competitive ratio of the algorithm decay exponentially fast with the length of the prediction window. Moreover, we provide a fundamental lower bound on dynamic regret for general online algorithms with a finite prediction window, and show that the dynamic regret of any online algorithm, even with more computation, decays at most exponentially as the prediction window size increases. This demonstrates that limited prediction information, rather than limited computational power, is the key constraint on performance in online decision making. Finally, we present simulation results to evaluate our algorithm numerically.
UR - http://www.scopus.com/inward/record.url?scp=85052583774&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052583774&partnerID=8YFLogxK
U2 - 10.23919/ACC.2018.8431296
DO - 10.23919/ACC.2018.8431296
M3 - Conference contribution
AN - SCOPUS:85052583774
SN - 9781538654286
T3 - Proceedings of the American Control Conference
SP - 3008
EP - 3013
BT - 2018 Annual American Control Conference, ACC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 27 June 2018 through 29 June 2018
ER -