TY - JOUR
T1 - Mean field analysis of neural networks
T2 - A law of large numbers
AU - Sirignano, Justin
AU - Spiliopoulos, Konstantinos
N1 - Publisher Copyright:
© 2020 Society for Industrial and Applied Mathematics.
PY - 2020
Y1 - 2020
N2 - Machine learning, and in particular neural network models, have revolutionized fields such as image, text, and speech recognition. Today, many important real-world applications in these areas are driven by neural networks. There are also growing applications in engineering, robotics, medicine, and finance. Despite their immense success in practice, there is limited mathematical understanding of neural networks. This paper illustrates how neural networks can be studied via stochastic analysis and develops approaches for addressing some of the technical challenges which arise. We analyze one-layer neural networks in the asymptotic regime of simultaneously (a) large network sizes and (b) large numbers of stochastic gradient descent training iterations. We rigorously prove that the empirical distribution of the neural network parameters converges to the solution of a nonlinear partial differential equation. This result can be considered a law of large numbers for neural networks. In addition, a consequence of our analysis is that the trained parameters of the neural network asymptotically become independent, a property which is commonly called "propagation of chaos."
AB - Machine learning, and in particular neural network models, have revolutionized fields such as image, text, and speech recognition. Today, many important real-world applications in these areas are driven by neural networks. There are also growing applications in engineering, robotics, medicine, and finance. Despite their immense success in practice, there is limited mathematical understanding of neural networks. This paper illustrates how neural networks can be studied via stochastic analysis and develops approaches for addressing some of the technical challenges which arise. We analyze one-layer neural networks in the asymptotic regime of simultaneously (a) large network sizes and (b) large numbers of stochastic gradient descent training iterations. We rigorously prove that the empirical distribution of the neural network parameters converges to the solution of a nonlinear partial differential equation. This result can be considered a law of large numbers for neural networks. In addition, a consequence of our analysis is that the trained parameters of the neural network asymptotically become independent, a property which is commonly called "propagation of chaos."
KW - Machine learning
KW - Stochastic analysis
KW - Weak convergence
UR - http://www.scopus.com/inward/record.url?scp=85084454058&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85084454058&partnerID=8YFLogxK
U2 - 10.1137/18M1192184
DO - 10.1137/18M1192184
M3 - Article
AN - SCOPUS:85084454058
SN - 0036-1399
VL - 80
SP - 725
EP - 752
JO - SIAM Journal on Applied Mathematics
JF - SIAM Journal on Applied Mathematics
IS - 2
ER -