PIPE-SGD: A decentralized pipelined SGD framework for distributed deep net training

Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing

Research output: Contribution to journal › Conference article

Abstract

Distributed training of deep nets is an important technique to address some of the present-day computing challenges such as memory consumption and computational demands. Classical distributed approaches, synchronous or asynchronous, are based on the parameter-server architecture, i.e., worker nodes compute gradients that are communicated to the parameter server, which returns updated parameters. Recently, distributed training with AllReduce operations has gained popularity as well. While many of these approaches seem appealing, little is reported about wall-clock training-time improvements. In this paper, we carefully analyze the AllReduce-based setup, propose timing models that include network latency, bandwidth, cluster size, and compute time, and demonstrate that pipelined training with a width of two combines the best of both synchronous and asynchronous training. Specifically, for a setup consisting of a four-node GPU cluster, we show wall-clock training-time improvements of up to 5.4× compared to conventional approaches.
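Since this record carries only the abstract, the following is a minimal back-of-the-envelope sketch of the kind of timing comparison the abstract alludes to: it contrasts a synchronous AllReduce iteration (compute followed by communication) with a pipelined schedule of width two, in which the AllReduce of the previous iteration's gradients overlaps with the current iteration's compute. The ring-AllReduce cost model, the function names (`ring_allreduce_time`, `per_iteration_time`), and all parameter values are illustrative assumptions, not the paper's actual timing model or measurements.

```python
# Illustrative sketch only: compares per-iteration wall-clock time of synchronous
# AllReduce training with a pipelined schedule of width two, where communication
# of the previous iteration's gradients overlaps with the current compute step.
# Assumes a standard ring-AllReduce cost model; all numbers are made up.

def ring_allreduce_time(model_bytes, nodes, latency_s, bandwidth_bps):
    """Classic ring AllReduce: 2*(nodes-1) steps, each moving model_bytes/nodes."""
    steps = 2 * (nodes - 1)
    per_step = latency_s + (model_bytes / nodes) / bandwidth_bps
    return steps * per_step

def per_iteration_time(compute_s, comm_s, pipelined):
    # Synchronous training serializes compute and communication; a pipeline of
    # width two overlaps them, so the slower of the two phases dominates.
    return max(compute_s, comm_s) if pipelined else compute_s + comm_s

if __name__ == "__main__":
    model_bytes = 100e6        # ~100 MB of gradients per iteration (illustrative)
    nodes = 4                  # cluster size used in the paper's experiments
    latency_s = 50e-6          # per-message network latency (illustrative)
    bandwidth_bps = 1.25e9     # ~10 Gb/s link in bytes/s (illustrative)
    compute_s = 0.080          # forward+backward time per iteration (illustrative)

    comm_s = ring_allreduce_time(model_bytes, nodes, latency_s, bandwidth_bps)
    t_sync = per_iteration_time(compute_s, comm_s, pipelined=False)
    t_pipe = per_iteration_time(compute_s, comm_s, pipelined=True)
    print(f"AllReduce time:      {comm_s * 1e3:.1f} ms")
    print(f"synchronous iter:    {t_sync * 1e3:.1f} ms")
    print(f"pipelined iter:      {t_pipe * 1e3:.1f} ms  (speedup {t_sync / t_pipe:.2f}x)")
```

Under these assumed numbers, communication and compute are comparable in size, so overlapping them removes most of the communication cost from the critical path; the paper's reported speedups additionally reflect its specific system design and hardware.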

Original language: English (US)
Pages (from-to): 8045-8056
Number of pages: 12
Journal: Advances in Neural Information Processing Systems
Volume: 2018-December
State: Published - Jan 1 2018
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: Dec 2 2018 - Dec 8 2018

Fingerprint

  • Clocks
  • Servers
  • Bandwidth
  • Data storage equipment
  • Graphics processing unit

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

PIPE-SGD: A decentralized pipelined SGD framework for distributed deep net training. / Li, Youjie; Yu, Mingchao; Li, Songze; Avestimehr, Salman; Kim, Nam Sung; Schwing, Alexander.

In: Advances in Neural Information Processing Systems, Vol. 2018-December, 01.01.2018, p. 8045-8056.


