Abstract
As the number of cores per node continues to grow, it becomes increasingly important for MPI to leverage shared memory for intranode communication. This paper investigates the design and optimization of MPI collectives for clusters of NUMA nodes. We develop performance models for collective communication using shared memory, and we develop several algorithms for various collectives. Experiments are conducted on both Xeon X5650 and Opteron 6100 InfiniBand clusters. The measurements agree with the models and indicate that different algorithms dominate for short vectors and for long vectors. We compare our shared-memory allreduce with several traditional MPI implementations (Open MPI, MPICH2, and MVAPICH2) that use system shared memory to facilitate interprocess communication. On a 16-node Xeon cluster and an 8-node Opteron cluster, our implementation achieves an average speedup of 2.5X and 2.3X over MVAPICH2, respectively. Our techniques enable an efficient implementation of collective operations on future multi- and manycore systems.
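For context, the collective under study is MPI's allreduce. The sketch below is ordinary user-level MPI code, not the paper's shared-memory algorithm: it shows the standard MPI_Allreduce call that such intranode optimizations accelerate; the vector length and the MPI_SUM reduction are illustrative assumptions.

```c
/* Minimal sketch of the allreduce collective targeted by shared-memory
 * optimizations.  This is standard MPI user code, not the paper's
 * internal algorithm; vector length and reduction op are illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;               /* a "long vector" case */
    double *local  = malloc(n * sizeof(double));
    double *global = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++)
        local[i] = rank + i * 1e-6;

    /* Every rank contributes its vector and receives the element-wise
     * sum.  Among ranks on the same node, a shared-memory algorithm can
     * perform this reduction without going through the network stack. */
    MPI_Allreduce(local, global, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global[0] = %f\n", global[0]);

    free(local);
    free(global);
    MPI_Finalize();
    return 0;
}
```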
Original language | English (US) |
---|---|
Pages | 85-96 |
Number of pages | 12 |
DOIs | |
State | Published - 2013 |
Event | 22nd ACM International Symposium on High-Performance Parallel and Distributed Computing, HPDC 2013 - New York, NY, United States |
Duration | Jun 17 2013 → Jun 21 2013 |
Keywords
- MPI
- MPI-allreduce
- NUMA
- collective communication
- multithreading
ASJC Scopus subject areas
- Software