Optimization for Deep Learning: An Overview

Research output: Contribution to journal › Article › peer-review

Abstract

Optimization is a critical component of deep learning. We think optimization for neural networks is an interesting topic for theoretical research for several reasons. First, its tractability despite non-convexity is an intriguing question whose resolution may greatly expand our understanding of tractable problems. Second, classical optimization theory is far from sufficient to explain many observed phenomena. We therefore aim to understand the challenges and opportunities from a theoretical perspective and to review the existing research in this field. First, we discuss the issue of gradient explosion/vanishing and the more general issue of an undesirable spectrum, and then discuss practical solutions, including careful initialization, normalization methods, and skip connections. Second, we review generic optimization methods used in training neural networks, such as stochastic gradient descent and adaptive gradient methods, along with existing theoretical results. Third, we review existing research on the global issues of neural network training, including results on the global landscape, mode connectivity, the lottery ticket hypothesis, and the neural tangent kernel.
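The gradient explosion/vanishing issue mentioned above, and the role of careful initialization, can be illustrated with a minimal NumPy sketch (not from the article itself; the layer width, depth, and weight scales below are illustrative assumptions). Backpropagation through a deep stack of linear layers multiplies the gradient by each weight matrix, so the gradient norm shrinks or grows geometrically with the typical singular values of the weights:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 50, 64  # illustrative depth and layer width

def backprop_grad_norm(scale):
    """Norm of a gradient backpropagated through L random linear layers.

    Each layer applies g <- W^T g, with W_ij ~ N(0, scale^2 / n), so the
    gradient norm changes by roughly a factor of `scale` per layer.
    """
    g = np.ones(n)
    for _ in range(L):
        W = rng.normal(0.0, scale / np.sqrt(n), size=(n, n))
        g = W.T @ g
    return np.linalg.norm(g)

# Weights initialized too small: the gradient vanishes geometrically.
print(backprop_grad_norm(0.5))
# Variance-preserving initialization (scale 1/sqrt(n) per entry):
# the gradient norm stays of moderate size across all 50 layers.
print(backprop_grad_norm(1.0))
```

The second call corresponds to the spectrum-aware initializations the abstract alludes to: choosing the weight variance so that each layer approximately preserves the gradient norm, instead of letting it decay (or blow up) exponentially with depth.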

Original language: English (US)
Pages (from-to): 249-294
Number of pages: 46
Journal: Journal of the Operations Research Society of China
Volume: 8
Issue number: 2
State: Published - Jun 1 2020

Keywords

  • Convergence
  • Deep learning
  • Landscape
  • Neural networks
  • Non-convex optimization

ASJC Scopus subject areas

  • General Mathematics
  • Management Science and Operations Research
