Abstract
In this article, a mathematical formulation for describing and designing activation functions in deep neural networks is provided. The methodology is based on a precise characterization of activation functions that satisfy particular design criteria, including avoiding vanishing or exploding gradients during training. The search for such activation functions is formulated as an infinite-dimensional optimization problem, which is then relaxed to the solution of a partial differential equation. Furthermore, bounds that guarantee the optimality of the designed activation function are derived. The methodology is illustrated with examples involving several state-of-the-art activation functions.
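The abstract only sketches the design criteria, but the vanishing/exploding-gradient condition it refers to can be made concrete with a minimal numerical sketch (an illustration of the general phenomenon, not the paper's formulation): by the chain rule, the gradient backpropagated through a deep composition scales roughly with the product of the activation derivatives along the path, so a well-designed activation keeps that product bounded away from zero and infinity.

```python
import numpy as np

# Illustrative sketch only, not the paper's method. The backpropagated
# gradient through `depth` layers scales (roughly) with the product of the
# activation derivatives phi'(z_l) along the path, so an activation whose
# derivative stays near 1 keeps gradients from vanishing or exploding.

def grad_factor(phi_prime, depth, z=1.0):
    """Product of |phi'(z)| over `depth` layers at a fixed pre-activation z."""
    return np.prod([abs(phi_prime(z)) for _ in range(depth)])

sigmoid_prime = lambda z: np.exp(-z) / (1.0 + np.exp(-z)) ** 2  # max 0.25: vanishes
tanh_prime = lambda z: 1.0 - np.tanh(z) ** 2                    # max 1, decays off-center
relu_prime = lambda z: 1.0 if z > 0 else 0.0                    # exactly 1 where active

for name, d in [("sigmoid", sigmoid_prime), ("tanh", tanh_prime), ("ReLU", relu_prime)]:
    print(f"{name:8s} gradient factor at depth 50: {grad_factor(d, 50):.3e}")
```

Running the sketch shows the sigmoid factor collapsing toward zero at depth 50 while ReLU's stays at 1 on its active side; keeping such a factor well behaved is the kind of criterion the optimization problem in the article is meant to enforce.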
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1-14 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| DOIs | |
| State | Accepted/In press - 2022 |
Keywords
- Artificial neural networks
- Behavioral sciences
- Computer architecture
- Deep learning
- feedforward neural networks
- Neural networks
- Optimization
- partial differential equations (PDEs)
- Search problems
- Training
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence