TY - JOUR
T1 - A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order
AU - Lian, Xiangru
AU - Zhang, Huan
AU - Hsieh, Cho-Jui
AU - Huang, Yijun
AU - Liu, Ji
N1 - Funding Information:
This project is supported in part by NSF grant CNS-1548078. We especially thank Chen-Tse Tsai for providing the code and data for the Yahoo Music Competition.
Publisher Copyright:
© 2016 NIPS Foundation - All Rights Reserved.
PY - 2016
Y1 - 2016
AB - Asynchronous parallel optimization has recently received substantial success and extensive attention. One of the core theoretical questions is how much speedup (or benefit) asynchronous parallelization can bring. This paper provides a comprehensive and generic analysis of the speedup property for a broad range of asynchronous parallel stochastic algorithms, from zeroth-order to first-order methods. Our result recovers or improves existing analyses for special cases, provides more insight into asynchronous parallel behavior, and suggests a novel asynchronous parallel zeroth-order method for the first time. Our experiments provide novel applications of the proposed asynchronous parallel zeroth-order method to hyperparameter tuning and model blending problems.
UR - http://www.scopus.com/inward/record.url?scp=85014547151&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85014547151&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85014547151
SN - 1049-5258
SP - 3062
EP - 3070
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 30th Annual Conference on Neural Information Processing Systems, NIPS 2016
Y2 - 5 December 2016 through 10 December 2016
ER -