Abstract
We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching output derivatives up to a given finite order (Sontag, 1998), we reformulate the learning problem in familiar system-theoretic language and derive quantitative guarantees on the sup-norm risk of the learned model in terms of the number of neurons, the sample size, the number of derivatives being matched, and the regularity properties of the inputs, the outputs, and the unknown i/o map.
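The model class in the abstract (a continuous-time recurrent neural net with tanh activation) can be sketched as the ODE x'(t) = -x(t) + W tanh(x(t)) + B u(t) with linear readout y(t) = C x(t). Below is a minimal forward-Euler simulation of such a system; the specific parameterization, random weights, and function names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def simulate_ct_rnn(u, T, n_states=8, n_inputs=1, n_outputs=1, dt=1e-2, seed=0):
    """Forward-Euler simulation of a continuous-time RNN
        x'(t) = -x(t) + W @ tanh(x(t)) + B @ u(t),   y(t) = C @ x(t).
    The weights here are random placeholders for illustration only."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / np.sqrt(n_states), size=(n_states, n_states))
    B = rng.normal(size=(n_states, n_inputs))
    C = rng.normal(size=(n_outputs, n_states))
    x = np.zeros(n_states)  # initial state
    ys = []
    for t in np.arange(0.0, T, dt):
        ys.append(C @ x)                                  # record output y(t)
        x = x + dt * (-x + W @ np.tanh(x) + B @ u(t))     # Euler step of the ODE
    return np.array(ys)

# example: drive the net with a constant scalar input on [0, 1]
y = simulate_ct_rnn(lambda t: np.array([1.0]), T=1.0)
```

In the learning problem described above, the weights (W, B, C) would instead be fit by empirical risk minimization on sampled i/o pairs, with the sup-norm risk bounds depending on the network size and sample size.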
| | |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 425-435 |
| Number of pages | 11 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 144 |
| State | Published - 2021 |
| Event | 3rd Annual Conference on Learning for Dynamics and Control, L4DC 2021 - Virtual, Online, Switzerland |
| Duration | Jun 7 2021 → Jun 8 2021 |
Keywords
- continuous time
- dynamical systems
- empirical risk minimization
- generalization bounds
- recurrent neural nets
- statistical learning theory
- system identification
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability