TY - JOUR
T1 - DPM
T2 - A deep learning PDE augmentation method with application to large-eddy simulation
AU - Sirignano, Justin
AU - MacArt, Jonathan F.
AU - Freund, Jonathan B.
N1 - Funding Information:
This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the State of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana–Champaign and its National Center for Supercomputing Applications.
Publisher Copyright:
© 2020 Elsevier Inc.
PY - 2020/12/15
Y1 - 2020/12/15
N2 - A framework is introduced that leverages known physics to reduce overfitting in machine learning for scientific applications. The partial differential equation (PDE) that expresses the physics is augmented with a neural network that uses available data to learn a description of the corresponding unknown or unrepresented physics. Training within this combined system corrects for missing, unknown, or erroneously represented physics, including discretization errors associated with the PDE's numerical solution. For optimization of the network within the PDE, an adjoint PDE is solved to provide high-dimensional gradients, and a stochastic adjoint method (SAM) further accelerates training. The approach is demonstrated for large-eddy simulation (LES) of turbulence. High-fidelity direct numerical simulations (DNS) of decaying isotropic turbulence provide the training data used to learn sub-filter-scale closures for the filtered Navier–Stokes equations. Out-of-sample comparisons show that the deep learning PDE method outperforms widely-used models, even for filter sizes so large that they become qualitatively incorrect. It also significantly outperforms the same neural network when a priori trained based on simple data mismatch, not accounting for the full PDE. Measures of discretization errors, which are well-known to be consequential in LES, point to the importance of the unified training formulation's design, which without modification corrects for them. For comparable accuracy, simulation runtime is significantly reduced. A relaxation of the typical discrete enforcement of the divergence-free constraint in the solver is also successful, instead allowing the DPM to approximately enforce incompressibility physics. Since the training loss function is not restricted to correspond directly to the closure to be learned, training can incorporate diverse data, including experimental data.
KW - Deep learning
KW - Large-eddy simulation
KW - Scientific machine learning
KW - Sub-grid-scale modeling
KW - Turbulence simulation
UR - http://www.scopus.com/inward/record.url?scp=85091919343&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091919343&partnerID=8YFLogxK
U2 - 10.1016/j.jcp.2020.109811
DO - 10.1016/j.jcp.2020.109811
M3 - Article
AN - SCOPUS:85091919343
SN - 0021-9991
VL - 423
JO - Journal of Computational Physics
JF - Journal of Computational Physics
M1 - 109811
ER -