A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order

Xiangru Lian, Huan Zhang, Cho-Jui Hsieh, Yijun Huang, Ji Liu

Research output: Contribution to journal › Conference article › peer-review

Abstract

Asynchronous parallel optimization has achieved substantial success and received extensive attention recently. One of the core theoretical questions is how much speedup (or benefit) asynchronous parallelization can bring. This paper provides a comprehensive and generic analysis of the speedup property for a broad range of asynchronous parallel stochastic algorithms, from zeroth-order to first-order methods. Our result recovers or improves existing analyses of special cases, provides more insight into asynchronous parallel behavior, and suggests a novel asynchronous parallel zeroth-order method for the first time. Our experiments provide novel applications of the proposed asynchronous parallel zeroth-order method to hyperparameter tuning and model blending problems.
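The zeroth-order methods covered by the paper's analysis rely on estimating gradients from function values alone, which is what makes them applicable to hyperparameter tuning, where no analytic gradient exists. As an illustrative sketch only (not the paper's algorithm, and without the asynchronous parallelism), a standard two-point random-direction gradient estimator and a plain zeroth-order SGD loop might look like this; the function, step size, and smoothing parameter `mu` here are arbitrary choices for the demo:

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate along a random Gaussian
    direction u:  g = ((f(x + mu*u) - f(x - mu*u)) / (2*mu)) * u.
    In expectation g approximates the gradient of a smoothed version of f,
    using only function evaluations (no derivatives)."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Demo: minimize f(x) = ||x||^2 with (serial) zeroth-order SGD.
rng = np.random.default_rng(0)
f = lambda x: float(np.dot(x, x))
x = np.ones(5)
for t in range(2000):
    x -= 0.01 * zo_gradient_estimate(f, x, rng=rng)
print(f(x))  # close to 0 after 2000 iterations
```

In the asynchronous parallel setting the paper studies, many workers would run such query-and-update steps concurrently against a shared iterate, so each update may be computed from a stale copy of `x`; the paper's contribution is quantifying how much staleness can be tolerated while retaining a linear speedup.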

Original language: English (US)
Pages (from-to): 3062-3070
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
State: Published - 2016
Externally published: Yes
Event: 30th Annual Conference on Neural Information Processing Systems, NIPS 2016 - Barcelona, Spain
Duration: Dec 5 2016 - Dec 10 2016

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
