Stability and Performance Analysis of Discrete-Time ReLU Recurrent Neural Networks

Sahel Vahedi Noori, Bin Hu, Geir Dullerud, Peter Seiler

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents sufficient conditions for the stability and ℓ2-gain performance of recurrent neural networks (RNNs) with ReLU activation functions. These conditions are derived by combining Lyapunov/dissipativity theory with Quadratic Constraints (QCs) satisfied by repeated ReLUs. We write a general class of QCs for repeated ReLUs using known properties of the scalar ReLU. Our stability and performance condition uses these QCs along with a 'lifted' representation for the ReLU RNN. We show that the positive homogeneity property satisfied by a scalar ReLU does not expand the class of QCs for the repeated ReLU. We present examples to demonstrate the stability/performance condition and study the effect of the lifting horizon.
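For orientation, here is a minimal sketch of one QC of this type; it is our illustration, not reproduced from the paper, and it uses only the well-known scalar properties v = max(0, u) ⇒ v ≥ 0, v ≥ u, and v(v − u) = 0. Applying the ReLU elementwise to u ∈ ℝ^n, the complementarity property yields, for any diagonal T ⪰ 0:

\[
  \begin{bmatrix} u \\ v \end{bmatrix}^{\top}
  \begin{bmatrix} 0 & T \\ T & -2T \end{bmatrix}
  \begin{bmatrix} u \\ v \end{bmatrix}
  \;=\; 2\, v^{\top} T (u - v) \;=\; 0,
  \qquad v = \max(0, u) \ \text{(elementwise)}.
\]

Constraints of this form can serve as multipliers in a Lyapunov/dissipativity inequality; the class of QCs developed in the paper is more general than this single example.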

Original language: English (US)
Title of host publication: 2024 IEEE 63rd Conference on Decision and Control, CDC 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 8626-8632
Number of pages: 7
ISBN (Electronic): 9798350316339
State: Published - 2024
Event: 63rd IEEE Conference on Decision and Control, CDC 2024 - Milan, Italy
Duration: Dec 16 2024 - Dec 19 2024

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
ISSN (Print): 0743-1546
ISSN (Electronic): 2576-2370

Conference

Conference: 63rd IEEE Conference on Decision and Control, CDC 2024
Country/Territory: Italy
City: Milan
Period: 12/16/24 - 12/19/24

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization
