A Formal Characterization of Activation Functions in Deep Neural Networks

Massi Amrouche, Dusan M. Stipanovic

Research output: Contribution to journal › Article › peer-review


In this article, a mathematical formulation for describing and designing activation functions in deep neural networks is presented. The methodology is based on a precise characterization of desired activation functions that satisfy particular criteria, including circumventing vanishing or exploding gradients during training. The problem of finding such activation functions is formulated as an infinite-dimensional optimization problem, which is then relaxed to solving a partial differential equation. Furthermore, bounds that guarantee the optimality of the designed activation function are provided. Relevant examples featuring some state-of-the-art activation functions illustrate the methodology.
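One of the design criteria the abstract names is circumventing vanishing or exploding gradients. As an illustration of that phenomenon (not the paper's PDE-based method), the following sketch shows why the magnitude of an activation function's derivative matters: by the chain rule, the gradient flowing through L layers contains a product of L activation derivatives, so a derivative bounded well below 1 (as for the sigmoid, whose derivative is at most 0.25) makes that product shrink geometrically with depth, while a derivative equal to 1 on the active region (as for ReLU) preserves it. The layer count and pre-activation values below are arbitrary choices for the demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: sigma'(x) = sigma(x) * (1 - sigma(x)) <= 0.25
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # Derivative of ReLU: 1 on the positive region, 0 elsewhere
    return (x > 0).astype(float)

depth = 50
x = np.full(depth, 1.0)  # pre-activations held at 1.0 in every layer

# The chain rule multiplies one activation derivative per layer.
sigmoid_product = np.prod(sigmoid_grad(x))  # shrinks geometrically with depth
relu_product = np.prod(relu_grad(x))        # stays 1.0 for positive inputs

print(f"sigmoid gradient product over {depth} layers: {sigmoid_product:.3e}")
print(f"relu    gradient product over {depth} layers: {relu_product:.1f}")
```

Running this shows the sigmoid's gradient product collapsing to a vanishingly small number after 50 layers while ReLU's stays at 1.0, which is the practical failure mode the paper's characterization is designed to rule out.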

Original language: English (US)
Pages (from-to): 1-14
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
State: Accepted/In press - 2022


Keywords

  • Artificial neural networks
  • Behavioral sciences
  • Computer architecture
  • Deep learning
  • Feedforward neural networks
  • Neural networks
  • Optimization
  • Partial differential equations (PDEs)
  • Search problems
  • Training

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence


