Abstract
It is well known that continuous-time recurrent neural nets are universal approximators for continuous-time dynamical systems. However, existing results provide approximation guarantees only for finite-time trajectories. In this work, we show that infinite-time trajectories generated by dynamical systems that are stable in a certain sense can be reproduced arbitrarily accurately by recurrent neural nets. For a subclass of these stable systems, we provide quantitative estimates on the sufficient number of neurons needed to achieve a specified error tolerance.
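As a minimal illustration of the setting (not the paper's construction), the sketch below assumes a continuous-time RNN of the common form ẋ = −x + W tanh(Ax + b) and hand-picks weights so that it exactly reproduces a simple stable scalar system; both are integrated by forward Euler. All function names, weights, and the target system here are hypothetical choices for illustration.

```python
import numpy as np

def f_target(x):
    # hypothetical stable target dynamics: xdot = -x + 0.5*tanh(x)
    # (equilibrium at 0; linearization -1 + 0.5 < 0, so it is locally stable)
    return -x + 0.5 * np.tanh(x)

def f_rnn(x, W=0.5, A=1.0, b=0.0):
    # one-neuron continuous-time RNN: xdot = -x + W*tanh(A*x + b);
    # with these weights its vector field coincides with f_target
    return -x + W * np.tanh(A * x + b)

def simulate(f, x0, dt=1e-3, T=5.0):
    # forward-Euler integration of xdot = f(x) from x(0) = x0
    xs = [x0]
    for _ in range(int(T / dt)):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return np.array(xs)

traj_true = simulate(f_target, x0=1.0)
traj_rnn = simulate(f_rnn, x0=1.0)

# worst-case deviation between the target trajectory and the RNN trajectory
err = np.max(np.abs(traj_true - traj_rnn))
print(err)
# both trajectories decay toward the stable equilibrium at 0
print(abs(traj_true[-1]))
```

In the paper's regime the RNN only approximates the target vector field, so the trajectory error would be small rather than zero; stability of the target is what keeps that error from growing over an infinite time horizon.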
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 384-392 |
| Number of pages | 9 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 120 |
| State | Published - 2020 |
| Event | 2nd Annual Conference on Learning for Dynamics and Control, L4DC 2020, Berkeley, United States (Jun 10 2020 → Jun 11 2020) |
Keywords
- Dynamical systems
- continuous time
- feedback
- recurrent neural nets
- simulation
- stability
- universal approximation
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability