NUMA-aware shared-memory collective communication for MPI

Shigang Li, Torsten Hoefler, Marc Snir

Research output: Contribution to conference › Paper

Abstract

As the number of cores per node keeps increasing, it becomes increasingly important for MPI to leverage shared memory for intranode communication. This paper investigates the design and optimizations of MPI collectives for clusters of NUMA nodes. We develop performance models for collective communication using shared memory, and we develop several algorithms for various collectives. Experiments are conducted on both Xeon X5650 and Opteron 6100 InfiniBand clusters. The measurements agree with the model and indicate that different algorithms dominate for short vectors and long vectors. We compare our shared-memory allreduce with several traditional MPI implementations - Open MPI, MPICH2, and MVAPICH2 - that utilize system shared memory to facilitate interprocess communication. On a 16-node Xeon cluster and 8-node Opteron cluster, our implementation achieves on average 2.5X and 2.3X speedup over MVAPICH2, respectively. Our techniques enable an efficient implementation of collective operations on future multi- and manycore systems.
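
The abstract refers to leveraging shared memory for intranode communication. As a point of reference only (this is not the paper's implementation), the sketch below shows how a standard MPI-3 program can split MPI_COMM_WORLD into per-node communicators and reduce directly out of a node-wide shared-memory window; the vector length N, the element-0-only reduction, and the NOCHECK assertion are illustrative assumptions.

/* Hypothetical sketch (not the paper's implementation): ranks on one node
 * reduce through an MPI-3 shared-memory window instead of message passing. */
#include <mpi.h>
#include <stdio.h>

#define N 1024  /* illustrative vector length */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Group the ranks that can share memory, i.e., the ranks on this node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Each rank owns one N-element slice of a node-wide shared window. */
    double *my_slice;
    MPI_Win win;
    MPI_Win_allocate_shared(N * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node_comm, &my_slice, &win);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);      /* passive-target epoch */
    for (int i = 0; i < N; i++)
        my_slice[i] = (double)node_rank;          /* local contribution */
    MPI_Win_sync(win);                            /* publish local stores */
    MPI_Barrier(node_comm);                       /* all slices written */
    MPI_Win_sync(win);                            /* observe remote stores */

    /* The node leader reduces directly out of the shared slices. */
    if (node_rank == 0) {
        double sum = 0.0;
        for (int r = 0; r < node_size; r++) {
            double *slice;
            MPI_Aint sz;
            int disp;
            MPI_Win_shared_query(win, r, &sz, &disp, &slice);
            sum += slice[0];                      /* element 0 only, for brevity */
        }
        printf("node leader: reduced element 0 = %f\n", sum);
    }

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

A NUMA-aware collective such as the one described in the abstract would additionally stage the reduction and place buffers to respect the node's NUMA domains; the sketch above ignores placement entirely and only illustrates the shared-memory mechanism.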

Original language: English (US)
Pages: 85-96
Number of pages: 12
DOIs: https://doi.org/10.1145/2462902.2462903
State: Published - Jul 17, 2013
Event: 22nd ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2013 - New York, NY, United States
Duration: Jun 17, 2013 - Jun 21, 2013

Other

Other: 22nd ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2013
Country: United States
City: New York, NY
Period: 6/17/13 - 6/21/13

Keywords

  • MPI
  • MPI-allreduce
  • NUMA
  • collective communication
  • multithreading

ASJC Scopus subject areas

  • Software

Cite this

Li, S., Hoefler, T., & Snir, M. (2013). NUMA-aware shared-memory collective communication for MPI (pp. 85-96). Paper presented at the 22nd ACM International Symposium on High-Performance Parallel and Distributed Computing (HPDC 2013), New York, NY, United States. https://doi.org/10.1145/2462902.2462903
