Feedback-Control Based Adversarial Attacks on Recurrent Neural Networks

Shankar A. Deka, Dusan M. Stipanovic, Claire J. Tomlin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Crafting adversarial inputs for attacks on neural networks, and robustifying networks against such attacks, have continued to be topics of keen interest in the machine learning community. Yet the vast majority of work in the current literature is purely empirical in nature. We present a novel viewpoint on adversarial attacks on recurrent neural networks (RNNs) through the lens of dynamical systems theory. In particular, we show how control theory-based analysis tools can be leveraged to compute these adversarial input disturbances and to bound their impact on network performance. The disturbances are computed dynamically at each time step by taking advantage of the recurrent architecture of RNNs, making them more efficient than prior approaches as well as amenable to 'real-time' attacks. Finally, the theoretical results are supported by illustrative examples.
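
The abstract describes disturbances computed step by step as the RNN state evolves, rather than optimized offline over a whole sequence. As a rough illustration of that 'real-time' structure only (not the paper's actual control-theoretic construction), the minimal PyTorch sketch below greedily perturbs each input as it arrives using a per-step gradient sign step; the random weights, the eps budget, and the targeted loss are all hypothetical stand-ins.

```python
# Illustrative sketch: per-time-step adversarial disturbance on a vanilla RNN,
# viewing the hidden state as a dynamical system
#   h_{t+1} = tanh(W_h h_t + W_x (x_t + d_t) + b).
# The paper derives d_t with control-theoretic tools; here a simple greedy
# gradient step per time step is substituted, purely for illustration.
import torch

torch.manual_seed(0)

n_in, n_hid, n_out, T = 4, 8, 2, 10
eps = 0.1  # per-step disturbance budget (hypothetical value)

# Random weights stand in for a trained RNN.
W_x = torch.randn(n_hid, n_in) * 0.5
W_h = torch.randn(n_hid, n_hid) * 0.5
b   = torch.zeros(n_hid)
W_o = torch.randn(n_out, n_hid) * 0.5

def step(h, x):
    """One step of the state dynamics h_{t+1} = tanh(W_h h + W_x x + b)."""
    return torch.tanh(W_h @ h + W_x @ x + b)

def attack_sequence(xs, target_cls):
    """Perturb each input x_t as it arrives, pushing the output logits
    toward target_cls (an untargeted variant would instead ascend the
    loss of the true class)."""
    h = torch.zeros(n_hid)
    for t in range(len(xs)):
        d = torch.zeros(n_in, requires_grad=True)
        h_next = step(h, xs[t] + d)
        logits = W_o @ h_next
        loss = -logits[target_cls]        # drive the target logit up
        loss.backward()                   # gradient w.r.t. this step's input only
        with torch.no_grad():
            d_t = -eps * d.grad.sign()    # FGSM-style bounded step
        # Commit the disturbed input and advance the state.
        h = step(h, xs[t] + d_t).detach()
        xs[t] = xs[t] + d_t
    return xs, W_o @ h

clean = torch.randn(T, n_in)
perturbed, final_logits = attack_sequence(clean.clone(), target_cls=1)
print("final logits after per-step attack:", final_logits)
```

Because each d_t depends only on the current state and input, the attacker needs no lookahead over the sequence, which is what makes the online, feedback-style attack possible.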

Original language: English (US)
Title of host publication: 2020 59th IEEE Conference on Decision and Control, CDC 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4677-4682
Number of pages: 6
ISBN (Electronic): 9781728174471
State: Published - Dec 14 2020
Event: 59th IEEE Conference on Decision and Control, CDC 2020 - Virtual, Jeju Island, Korea, Republic of
Duration: Dec 14 2020 - Dec 18 2020

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
Volume: 2020-December
ISSN (Print): 0743-1546
ISSN (Electronic): 2576-2370

Conference

Conference: 59th IEEE Conference on Decision and Control, CDC 2020
Country/Territory: Korea, Republic of
City: Virtual, Jeju Island
Period: 12/14/20 - 12/18/20

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization
