TY - JOUR

T1 - Enforcing constraints for interpolation and extrapolation in Generative Adversarial Networks

AU - Stinis, P.

AU - Hagge, Tobias

AU - Tartakovsky, Alexandre M.

AU - Yeung, E.

N1 - Funding Information:
The authors would like to thank C. Corley and N. Hodas for very useful discussions and comments. The material presented here is based upon work supported by the Pacific Northwest National Laboratory (PNNL) “Deep Learning for Scientific Discovery Agile Investment”. PNNL is operated by Battelle for the DOE under Contract DE-AC05-76RL01830.
Publisher Copyright:
© 2019 Elsevier Inc.

PY - 2019/11/15

Y1 - 2019/11/15

N2 - Generative Adversarial Networks (GANs) are becoming popular machine learning choices for training generators. At the same time there is a concerted effort in the machine learning community to expand the range of tasks in which learning can be applied as well as to utilize methods from other disciplines to accelerate learning. With this in mind, in the current work we suggest ways to enforce given constraints in the output of a GAN generator both for interpolation and extrapolation (prediction). For the case of dynamical systems, given a time series, we wish to train GAN generators that can be used to predict trajectories starting from a given initial condition. In this setting, the constraints can be in algebraic and/or differential form. Even though we are predominantly interested in the case of extrapolation, we will see that the tasks of interpolation and extrapolation are related. However, they need to be treated differently. For the case of interpolation, the incorporation of constraints is built into the training of the GAN. The incorporation of the constraints respects the primary game-theoretic setup of a GAN so it can be combined with existing algorithms. However, it can exacerbate the problem of instability during training that is well-known for GANs. We suggest adding small noise to the constraints as a simple remedy that has performed well in our numerical experiments. The case of extrapolation (prediction) is more involved. During training, the GAN generator learns to interpolate a noisy version of the data and we enforce the constraints. This approach has connections with model reduction that we can utilize to improve the efficiency and accuracy of the training. Depending on the form of the constraints, we may enforce them also during prediction through a projection step. We provide examples of linear and nonlinear systems of differential equations to illustrate the various constructions.

AB - Generative Adversarial Networks (GANs) are becoming popular machine learning choices for training generators. At the same time there is a concerted effort in the machine learning community to expand the range of tasks in which learning can be applied as well as to utilize methods from other disciplines to accelerate learning. With this in mind, in the current work we suggest ways to enforce given constraints in the output of a GAN generator both for interpolation and extrapolation (prediction). For the case of dynamical systems, given a time series, we wish to train GAN generators that can be used to predict trajectories starting from a given initial condition. In this setting, the constraints can be in algebraic and/or differential form. Even though we are predominantly interested in the case of extrapolation, we will see that the tasks of interpolation and extrapolation are related. However, they need to be treated differently. For the case of interpolation, the incorporation of constraints is built into the training of the GAN. The incorporation of the constraints respects the primary game-theoretic setup of a GAN so it can be combined with existing algorithms. However, it can exacerbate the problem of instability during training that is well-known for GANs. We suggest adding small noise to the constraints as a simple remedy that has performed well in our numerical experiments. The case of extrapolation (prediction) is more involved. During training, the GAN generator learns to interpolate a noisy version of the data and we enforce the constraints. This approach has connections with model reduction that we can utilize to improve the efficiency and accuracy of the training. Depending on the form of the constraints, we may enforce them also during prediction through a projection step. We provide examples of linear and nonlinear systems of differential equations to illustrate the various constructions.

KW - Dynamical systems

KW - Extrapolation

KW - Generative Adversarial Networks

KW - Machine learning

KW - Prediction

UR - http://www.scopus.com/inward/record.url?scp=85077736867&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85077736867&partnerID=8YFLogxK

U2 - 10.1016/j.jcp.2019.07.042

DO - 10.1016/j.jcp.2019.07.042

M3 - Article

AN - SCOPUS:85077736867

VL - 397

JO - Journal of Computational Physics

JF - Journal of Computational Physics

SN - 0021-9991

M1 - 108844

ER -