TY - JOUR
T1 - PIPE-SGD: A decentralized pipelined SGD framework for distributed deep net training
T2 - 32nd Conference on Neural Information Processing Systems, NeurIPS 2018
AU - Li, Youjie
AU - Yu, Mingchao
AU - Li, Songze
AU - Avestimehr, Salman
AU - Kim, Nam Sung
AU - Schwing, Alexander
N1 - Funding Information:
This work is supported in part by grants from NSF (IIS 17-18221, CNS 17-05047, CNS 15-57244, CCF-1763673, and CCF-1703575). This work is also supported by 3M and the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR). In addition, this material is based in part upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001117C0053. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
PY - 2018
Y1 - 2018
AB - Distributed training of deep nets is an important technique for addressing present-day computing challenges such as memory consumption and computational demands. Classical distributed approaches, synchronous or asynchronous, are based on the parameter-server architecture, i.e., worker nodes compute gradients that are communicated to the parameter server, which returns updated parameters. Recently, distributed training with AllReduce operations has gained popularity as well. While many of these approaches seem appealing, little has been reported about improvements in wall-clock training time. In this paper, we carefully analyze the AllReduce-based setup, propose timing models that include network latency, bandwidth, cluster size, and compute time, and demonstrate that pipelined training with a width of two combines the best of both synchronous and asynchronous training. Specifically, on a four-node GPU cluster we show wall-clock training-time improvements of up to 5.4× compared to conventional approaches.
UR - http://www.scopus.com/inward/record.url?scp=85064846369&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85064846369&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85064846369
VL - 2018-December
SP - 8045
EP - 8056
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
SN - 1049-5258
Y2 - 2 December 2018 through 8 December 2018
ER -