TY - GEN
T1 - Highly-scalable, physics-informed GANs for learning solutions of stochastic PDEs
AU - Yang, Liu
AU - Treichler, Sean
AU - Kurth, Thorsten
AU - Fischer, Keno
AU - Barajas-Solano, David
AU - Romero, Josh
AU - Churavy, Valentin
AU - Tartakovsky, Alexandre
AU - Houston, Michael
AU - Prabhat,
AU - Karniadakis, George
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Uncertainty quantification for forward and inverse problems is a central challenge across physical and biomedical disciplines. We address this challenge for the problem of modeling subsurface flow at the Hanford Site by combining stochastic computational models with observational data using physics-informed GAN models. The geographic extent, spatial heterogeneity, and multiple correlation length scales of the Hanford Site require training a computationally intensive GAN model to thousands of dimensions. We develop a highly optimized implementation that scales to 27,500 NVIDIA Volta GPUs. We develop a hierarchical scheme based on a multi-player game-theoretic approach for exploiting domain parallelism, map discriminators and generators to multiple GPUs, and employ efficient communication schemes to ensure training stability and convergence. Our implementation scales to 4584 nodes on the Summit supercomputer with a 93.1% scaling efficiency, achieving peak and sustained half-precision rates of 1228 PF/s and 1207 PF/s.
AB - Uncertainty quantification for forward and inverse problems is a central challenge across physical and biomedical disciplines. We address this challenge for the problem of modeling subsurface flow at the Hanford Site by combining stochastic computational models with observational data using physics-informed GAN models. The geographic extent, spatial heterogeneity, and multiple correlation length scales of the Hanford Site require training a computationally intensive GAN model to thousands of dimensions. We develop a highly optimized implementation that scales to 27,500 NVIDIA Volta GPUs. We develop a hierarchical scheme based on a multi-player game-theoretic approach for exploiting domain parallelism, map discriminators and generators to multiple GPUs, and employ efficient communication schemes to ensure training stability and convergence. Our implementation scales to 4584 nodes on the Summit supercomputer with a 93.1% scaling efficiency, achieving peak and sustained half-precision rates of 1228 PF/s and 1207 PF/s.
KW - Deep Learning
KW - GANs
KW - Stochastic PDEs
UR - http://www.scopus.com/inward/record.url?scp=85078152879&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078152879&partnerID=8YFLogxK
U2 - 10.1109/DLS49591.2019.00006
DO - 10.1109/DLS49591.2019.00006
M3 - Conference contribution
AN - SCOPUS:85078152879
T3 - Proceedings of DLS 2019: Deep Learning on Supercomputers - Held in conjunction with SC 2019: The International Conference for High Performance Computing, Networking, Storage and Analysis
SP - 1
EP - 11
BT - Proceedings of DLS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd IEEE/ACM Workshop on Deep Learning on Supercomputers, DLS 2019
Y2 - 17 November 2019
ER -