TY - JOUR
T1 - Towards exascale for wind energy simulations
AU - Min, Misun
AU - Brazell, Michael
AU - Tomboulides, Ananias
AU - Churchfield, Matthew
AU - Fischer, Paul
AU - Sprague, Michael
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/7
Y1 - 2024/7
N2 - We examine large-eddy-simulation modeling approaches and the computational performance of two open-source computational fluid dynamics codes for the simulation of atmospheric boundary layer (ABL) flows that are of direct relevance to wind energy production. The first code, NekRS, is a high-order, unstructured-grid, spectral-element code. The second code, AMR-Wind, is a second-order, block-structured, finite-volume code with adaptive mesh refinement capabilities. The objective of this study is to co-develop these codes to improve the model fidelity and performance of each; these improvements will be critical for running ABL-based applications such as wind farm analysis on advanced computing architectures. To this end, we investigate the performance of NekRS and AMR-Wind on the Oak Ridge Leadership Computing Facility supercomputers Summit, using 4 to 800 nodes (24 to 4,800 NVIDIA V100 GPUs), and Crusher, the testbed for the Frontier exascale system, using 18 to 384 Graphics Compute Dies of AMD MI250X GPUs. We compare strong- and weak-scaling capabilities, linear solver performance, and time to solution, and we identify the leading inhibitors to parallel scaling.
AB - We examine large-eddy-simulation modeling approaches and the computational performance of two open-source computational fluid dynamics codes for the simulation of atmospheric boundary layer (ABL) flows that are of direct relevance to wind energy production. The first code, NekRS, is a high-order, unstructured-grid, spectral-element code. The second code, AMR-Wind, is a second-order, block-structured, finite-volume code with adaptive mesh refinement capabilities. The objective of this study is to co-develop these codes to improve the model fidelity and performance of each; these improvements will be critical for running ABL-based applications such as wind farm analysis on advanced computing architectures. To this end, we investigate the performance of NekRS and AMR-Wind on the Oak Ridge Leadership Computing Facility supercomputers Summit, using 4 to 800 nodes (24 to 4,800 NVIDIA V100 GPUs), and Crusher, the testbed for the Frontier exascale system, using 18 to 384 Graphics Compute Dies of AMD MI250X GPUs. We compare strong- and weak-scaling capabilities, linear solver performance, and time to solution, and we identify the leading inhibitors to parallel scaling.
KW - Exascale
KW - large-eddy simulation
KW - scalability
UR - http://www.scopus.com/inward/record.url?scp=85193980509&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85193980509&partnerID=8YFLogxK
U2 - 10.1177/10943420241252511
DO - 10.1177/10943420241252511
M3 - Article
AN - SCOPUS:85193980509
SN - 1094-3420
VL - 38
SP - 337
EP - 355
JO - International Journal of High Performance Computing Applications
JF - International Journal of High Performance Computing Applications
IS - 4
ER -