TY - JOUR
T1 - MPI on millions of cores
AU - Balaji, Pavan
AU - Buntinas, Darius
AU - Goodell, David
AU - Gropp, William
AU - Hoefler, Torsten
AU - Kumar, Sameer
AU - Lusk, Ewing
AU - Thakur, Rajeev
AU - Träff, Jesper Larsson
N1 - Funding Information:
We thank the members of the MPI Forum who participated in helpful discussions of the presented topics. We also thank the anonymous reviewers for comments that improved the manuscript. This work was supported in part by the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy, under Contract DE-AC02-06CH11357 and award DE-FG02-08ER25835, and in part by the National Science Foundation award 0837719.
PY - 2011/3
Y1 - 2011/3
AB - Petascale parallel computers with more than a million processing cores are expected to be available in a couple of years. Although MPI is the dominant programming interface today for large-scale systems that at the highest end already have close to 300,000 processors, a challenging question for both researchers and users is whether MPI will scale to processor and core counts in the millions. In this paper, we examine the issue of scalability of MPI to very large systems. We first examine the MPI specification itself and discuss areas with scalability concerns and how they can be overcome. We then investigate issues that an MPI implementation must address in order to be scalable. To illustrate the issues, we ran a number of simple experiments to measure MPI memory consumption at scale on up to 131,072 processes, or 80% of the IBM Blue Gene/P system at Argonne National Laboratory. Based on the results, we identified nonscalable aspects of the MPI implementation and found ways to tune it to reduce its memory footprint. We also briefly discuss issues in application scalability to large process counts and features of MPI that enable the use of other techniques to alleviate scalability limitations in applications.
KW - MPI
KW - scalability
UR - http://www.scopus.com/inward/record.url?scp=79953084955&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79953084955&partnerID=8YFLogxK
DO - 10.1142/S0129626411000060
M3 - Article
AN - SCOPUS:79953084955
SN - 0129-6264
VL - 21
SP - 45
EP - 60
JO - Parallel Processing Letters
JF - Parallel Processing Letters
IS - 1
ER -