TY - GEN
T1 - To push or to pull
T2 - 26th ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2017
AU - Besta, Maciej
AU - Podstawski, Michał
AU - Groner, Linus
AU - Solomonik, Edgar
AU - Hoefler, Torsten
PY - 2017/6/26
Y1 - 2017/6/26
N2 - We reduce the cost of communication and synchronization in graph processing by analyzing the fastest way to process graphs: pushing updates to a shared state or pulling updates to a private state. We investigate the applicability of this push-pull dichotomy to various algorithms and its impact on complexity, performance, and the number of locks, atomics, and reads/writes used. We consider 11 graph algorithms, 3 programming models, 2 graph abstractions, and various families of graphs. The analysis illustrates surprising differences between push and pull variants of different algorithms in performance, speed of convergence, and code complexity; the insights are backed up by performance data from hardware counters. We use these findings to illustrate which variant is faster for each algorithm and to develop generic strategies that enable even higher speedups. Our insights can be used to accelerate graph processing engines or libraries on both massively parallel shared-memory machines and distributed-memory systems.
AB - We reduce the cost of communication and synchronization in graph processing by analyzing the fastest way to process graphs: pushing updates to a shared state or pulling updates to a private state. We investigate the applicability of this push-pull dichotomy to various algorithms and its impact on complexity, performance, and the number of locks, atomics, and reads/writes used. We consider 11 graph algorithms, 3 programming models, 2 graph abstractions, and various families of graphs. The analysis illustrates surprising differences between push and pull variants of different algorithms in performance, speed of convergence, and code complexity; the insights are backed up by performance data from hardware counters. We use these findings to illustrate which variant is faster for each algorithm and to develop generic strategies that enable even higher speedups. Our insights can be used to accelerate graph processing engines or libraries on both massively parallel shared-memory machines and distributed-memory systems.
UR - http://www.scopus.com/inward/record.url?scp=85025834651&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85025834651&partnerID=8YFLogxK
U2 - 10.1145/3078597.3078616
DO - 10.1145/3078597.3078616
M3 - Conference contribution
AN - SCOPUS:85025834651
T3 - HPDC 2017 - Proceedings of the 26th International Symposium on High-Performance Parallel and Distributed Computing
SP - 93
EP - 104
BT - HPDC 2017 - Proceedings of the 26th International Symposium on High-Performance Parallel and Distributed Computing
PB - Association for Computing Machinery, Inc
Y2 - 26 June 2017 through 30 June 2017
ER -