Efficient distributed learning with sparsity

Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, worker machines compute the gradient of the loss on their local data and the master machine solves a shifted regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines, and is independent of the other problem parameters, the proposed approach provably matches the estimation error bound of centralized methods.
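
The sketch below illustrates the kind of communication round the abstract describes: every worker sends the gradient of its local loss at the current iterate, and the master solves an l1-regularized problem on its own data whose linear term is shifted by the aggregated gradient. The squared loss, the ISTA solver, and all function names (local_gradient, master_round) are illustrative assumptions, not the authors' exact algorithm or code.

import numpy as np

def local_gradient(X, y, w):
    """Gradient of the local squared loss (1/2n)||Xw - y||^2 (assumed loss)."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def master_round(X0, y0, w, global_grad, lam, iters=500):
    """One master update: solve the shifted l1-regularized problem
        min_u  L_0(u) - <grad L_0(w) - global_grad, u> + lam * ||u||_1
    on the master's local data (X0, y0) via proximal gradient descent."""
    shift = local_gradient(X0, y0, w) - global_grad
    n = X0.shape[0]
    step = 1.0 / (np.linalg.norm(X0, 2) ** 2 / n)  # 1 / Lipschitz constant
    u = w.copy()
    for _ in range(iters):
        g = local_gradient(X0, y0, u) - shift
        u = soft_threshold(u - step * g, step * lam)
    return u

# Toy usage: data randomly partitioned across m "machines" held as blocks.
rng = np.random.default_rng(0)
d, m, n_per = 50, 4, 100
w_true = np.zeros(d); w_true[:5] = 1.0
blocks = []
for _ in range(m):
    X = rng.standard_normal((n_per, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n_per)
    blocks.append((X, y))

w = np.zeros(d)
for _ in range(5):  # a few communication rounds
    global_grad = np.mean([local_gradient(X, y, w) for X, y in blocks], axis=0)
    w = master_round(*blocks[0], w, global_grad, lam=0.05)

In this toy run, blocks[0] plays the role of the master machine; per-round communication is a single d-dimensional gradient from each worker, which is the property the paper's communication analysis concerns.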

Original language: English (US)
Title of host publication: 34th International Conference on Machine Learning, ICML 2017
Publisher: International Machine Learning Society (IMLS)
Pages: 5544-5563
Number of pages: 20
ISBN (Electronic): 9781510855144
State: Published - 2017
Externally published: Yes
Event: 34th International Conference on Machine Learning, ICML 2017 - Sydney, Australia
Duration: Aug 6 2017 - Aug 11 2017

Publication series

Name: 34th International Conference on Machine Learning, ICML 2017
Volume: 7

Other

Other: 34th International Conference on Machine Learning, ICML 2017
Country/Territory: Australia
City: Sydney
Period: 8/6/17 - 8/11/17

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software
