Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks

Cong Fang, Jason D. Lee, Pengkun Yang, Tong Zhang

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs), which can be used to analyze neural network training. In this framework, a DNN is represented by probability measures and functions over its features (that is, the function values of the hidden units over the training data) in the continuous limit, instead of the neural network parameters as most existing studies have done. This new representation overcomes the degenerate situation in which the hidden units of each middle layer become essentially identical, so that each layer effectively has only one meaningful hidden unit; it also leads to a simpler representation of DNNs. Moreover, we construct a non-linear dynamics called neural feature flow, which captures the evolution of an over-parameterized DNN trained by gradient descent. We illustrate the framework via the Residual Network (Res-Net) architecture. It is shown that when the neural feature flow process converges, it reaches a global minimal solution under suitable conditions.
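To make the feature-based viewpoint concrete, here is a minimal, purely illustrative sketch (not code from the paper): it trains a small over-parameterized two-layer network in the mean-field scaling by gradient descent, and records the "features" of the hidden units, i.e., their values over the training data, whose empirical distribution is the object the framework models in the continuous limit. The toy data, width, and learning-rate scaling are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): a width-m network in the
# mean-field scaling f(x) = (1/m) * sum_j a_j * relu(w_j . x).
rng = np.random.default_rng(0)
n, d, m = 32, 5, 2000                       # training points, input dim, width
X = rng.standard_normal((n, d))
y = np.tanh(X @ rng.standard_normal(d))     # synthetic targets

W = rng.standard_normal((m, d))             # first-layer weights w_j
a = rng.standard_normal(m)                  # second-layer weights a_j

def features(W):
    """Feature matrix H with H[j, i] = relu(w_j . x_i).

    Each row is one hidden unit's feature vector over the training data;
    the mean-field view studies the empirical distribution of these rows
    rather than the weights themselves."""
    return np.maximum(X @ W.T, 0.0).T       # shape (m, n)

lr = 0.5
for step in range(200):
    H = features(W)                         # (m, n)
    pred = (a @ H) / m                      # f(x_i) for all training points
    resid = pred - y                        # (n,)
    # Gradients of the mean squared error 0.5 * mean(resid^2).
    grad_a = (H @ resid) / (m * n)
    mask = (X @ W.T > 0).T                  # relu'(w_j . x_i), shape (m, n)
    grad_W = ((a[:, None] * mask * resid[None, :]) @ X) / (m * n)
    # Scale the step by m so each unit moves at an O(1) rate as m grows
    # (a common convention in the mean-field regime, assumed here).
    a -= lr * m * grad_a
    W -= lr * m * grad_W
    if step % 50 == 0:
        print(f"step {step:3d}  loss {0.5 * np.mean(resid**2):.4f}")

# Rows of H_final are samples from the (empirical) measure over features;
# in the limit m -> infinity this measure evolves by a deterministic
# dynamics, which is the kind of object the neural feature flow describes.
H_final = features(W)
```

In this sketch, following the trajectory of `H_final` over training steps, instead of the trajectory of `W` and `a`, is the discrete analogue of tracking the measure-valued dynamics that the paper formalizes.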

Original language: English (US)
Pages (from-to): 1887-1936
Number of pages: 50
Journal: Proceedings of Machine Learning Research
Volume: 134
State: Published - 2021
Externally published: Yes
Event: 34th Conference on Learning Theory, COLT 2021 - Boulder, United States
Duration: Aug 15, 2021 - Aug 19, 2021

Keywords

  • deep residual network
  • global minimum
  • mean-field theory
  • non-linear dynamics

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
