TY - GEN
T1 - Architectural constraints to attain 1 exaflop/s for three scientific application classes
AU - Bhatele, Abhinav
AU - Jetley, Pritish
AU - Gahvari, Hormozd
AU - Wesolowski, Lukasz
AU - Gropp, William D
AU - Kale, Laxmikant V
PY - 2011
Y1 - 2011
AB - The first Teraflop/s computer, the ASCI Red, became operational in 1997, and it took more than 11 years for a Petaflop/s performance machine, the IBM Roadrunner, to appear on the Top500 list. Efforts have begun to study the hardware and software challenges for building an exascale machine. It is important to understand and meet these challenges in order to attain Exaflop/s performance. This paper presents a feasibility study of three important application classes to formulate the constraints that these classes will impose on the machine architecture for achieving a sustained performance of 1 Exaflop/s. The application classes considered in this paper are: classical molecular dynamics, cosmological simulations, and unstructured grid computations (finite element solvers). We analyze the problem sizes required for representative algorithms in each class to achieve 1 Exaflop/s and the hardware requirements in terms of the network and memory. Based on the analysis for achieving an Exaflop/s, we also discuss the performance of these algorithms for much smaller problem sizes.
KW - application scalability
KW - cosmology
KW - exascale
KW - finite element methods
KW - molecular dynamics
KW - performance analysis
UR - http://www.scopus.com/inward/record.url?scp=80053290844&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80053290844&partnerID=8YFLogxK
DO - 10.1109/IPDPS.2011.18
M3 - Conference contribution
AN - SCOPUS:80053290844
SN - 9780769543857
T3 - Proceedings - 25th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2011
SP - 80
EP - 91
BT - Proceedings - 25th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2011
T2 - 25th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2011
Y2 - 16 May 2011 through 20 May 2011
ER -