Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks

Shankar A. Deka, Dusan M. Stipanovic, Claire J. Tomlin

Research output: Contribution to journal › Article › peer-review


Convolutional and recurrent neural networks (RNNs) have been widely used to achieve state-of-the-art performance on classification tasks. However, it has also been noted that these networks can be manipulated adversarially with relative ease, by carefully crafted additive perturbations to the input. Although several experimentally grounded prior works exist on crafting attacks and defending against them, rigorous theoretical analyses are also desirable to illuminate the conditions under which such adversarial inputs exist. This article provides both the theory and supporting experiments for real-time attacks. The focus is specifically on recurrent architectures, and inspiration is drawn from dynamical systems theory to naturally cast the problem as one of control, allowing adversarial perturbations to be computed dynamically at each timestep of the input sequence in the manner of a feedback controller. Illustrative examples are provided to supplement the theoretical discussions.
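The feedback-controller flavor of such an attack can be sketched with a generic one-step gradient heuristic. This is an illustrative assumption, not the paper's algorithm: the toy vanilla RNN, the greedy objective of pushing a target logit over the true one, and all variable names below are hypothetical.

```python
import numpy as np

# Hypothetical toy model (not the paper's setup): a vanilla RNN classifier
# with dynamics h_{t+1} = tanh(W h_t + U x_t) and final logits V h_T.
rng = np.random.default_rng(0)
n_h, n_x, n_cls, T = 8, 4, 3, 20
W = rng.normal(0.0, 0.5, (n_h, n_h))
U = rng.normal(0.0, 0.5, (n_h, n_x))
V = rng.normal(0.0, 0.5, (n_cls, n_h))

def step(h, x):
    """One step of the RNN dynamics."""
    return np.tanh(W @ h + U @ x)

def greedy_perturbation(h, x, v, eps):
    """Feedback-style attack step: given the CURRENT hidden state h, pick the
    input perturbation delta (||delta||_inf <= eps) that maximizes the
    first-order increase of v . h_{t+1}."""
    h_next = step(h, x)
    grad_x = U.T @ ((1.0 - h_next**2) * v)  # d(v . h_{t+1}) / dx via tanh'
    return eps * np.sign(grad_x)

# Clean input sequence; attack tries to raise class `target` over `true_cls`.
X = rng.normal(0.0, 1.0, (T, n_x))
true_cls, target = 0, 1
v = V[target] - V[true_cls]  # margin direction in hidden space
eps = 0.1

h_clean = np.zeros(n_h)
h_adv = np.zeros(n_h)
deltas = np.zeros_like(X)
for t in range(T):
    # The perturbation depends on the perturbed trajectory's current state,
    # which is what makes this a feedback (closed-loop) computation.
    deltas[t] = greedy_perturbation(h_adv, X[t], v, eps)
    h_adv = step(h_adv, X[t] + deltas[t])
    h_clean = step(h_clean, X[t])

margin_clean = float(v @ h_clean)
margin_adv = float(v @ h_adv)
```

Because each `deltas[t]` is computed online from the current hidden state rather than precomputed for the whole sequence, the loop mirrors the closed-loop, per-timestep structure described in the abstract.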

Original language: English (US)
Pages (from-to): 2615-2629
Number of pages: 15
Journal: IEEE Transactions on Control Systems Technology
Issue number: 6
State: Published - Nov 1 2022


Keywords

  • Adversarial examples
  • control synthesis
  • dynamical systems
  • recurrent neural network (RNN)

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering


