Improved MPI collectives for MPI processes in shared address spaces

Shigang Li, Torsten Hoefler, Chungjin Hu, Marc Snir

Research output: Contribution to journal › Article › peer-review

Abstract

As the number of cores per node keeps increasing, it becomes increasingly important for MPI to leverage shared memory for intranode communication. This paper investigates the design and optimization of MPI collectives for clusters of NUMA nodes. We develop performance models for collective communication using shared memory and demonstrate several algorithms for various collectives. Experiments are conducted on both Xeon X5650 and Opteron 6100 InfiniBand clusters. The measurements agree with the model and indicate that different algorithms dominate for short vectors and long vectors. We compare our shared-memory allreduce with several MPI implementations (Open MPI, MPICH2, and MVAPICH2) that utilize system shared memory to facilitate interprocess communication. On a 16-node Xeon cluster and an 8-node Opteron cluster, our implementation achieves geometric-mean speedups of 2.3X and 2.1X, respectively, over the best of these MPI implementations. Our techniques enable an efficient implementation of collective operations on future multi- and manycore systems.
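The abstract describes shared-memory-aware collectives only at a high level. The sketch below is a generic illustration of the hierarchical idea (intranode reduction, internode allreduce among node leaders, intranode broadcast) written with standard MPI-3 calls such as MPI_Comm_split_type, assuming one double per process. It is not the paper's implementation, which places MPI processes in shared address spaces and uses the algorithms and performance models developed there.

```c
/* Minimal sketch (an assumption, not the paper's method): a two-level
 * allreduce that reduces within each shared-memory node first, then
 * across nodes, then broadcasts the result back within each node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split MPI_COMM_WORLD into per-node (shared-memory) communicators. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, world_rank,
                        MPI_INFO_NULL, &node_comm);
    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* One leader per node joins the internode communicator. */
    MPI_Comm leader_comm;
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    double local = (double)world_rank;   /* value contributed by this process */
    double node_sum = 0.0, global_sum = 0.0;

    /* Step 1: intranode reduction to the node leader. */
    MPI_Reduce(&local, &node_sum, 1, MPI_DOUBLE, MPI_SUM, 0, node_comm);

    /* Step 2: internode allreduce among node leaders. */
    if (node_rank == 0)
        MPI_Allreduce(&node_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                      leader_comm);

    /* Step 3: broadcast the global result within each node. */
    MPI_Bcast(&global_sum, 1, MPI_DOUBLE, 0, node_comm);

    if (world_rank == 0)
        printf("allreduce result: %f\n", global_sum);

    if (leader_comm != MPI_COMM_NULL) MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched with mpirun, this can be compared against a plain MPI_Allreduce over MPI_COMM_WORLD; the paper's measured speedups come from its shared-address-space algorithms, not from this sketch.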

Original language: English (US)
Pages (from-to): 1139-1155
Number of pages: 17
Journal: Cluster Computing
Volume: 17
Issue number: 4
DOIs
State: Published - Nov 15 2014

Keywords

  • Collective communication
  • MPI
  • MPI_Allreduce
  • Multithreading
  • NUMA

ASJC Scopus subject areas

  • Software
  • Computer Networks and Communications
