Crafting adversarial inputs to attack neural networks, and robustifying networks against such attacks, remain topics of keen interest in the machine learning community. Yet the vast majority of work in the current literature is purely empirical. We present a novel viewpoint on adversarial attacks on recurrent neural networks (RNNs) through the lens of dynamical systems theory. In particular, we show how control-theoretic analysis tools can be leveraged to compute these adversarial input disturbances and to obtain bounds on their impact on network performance. The disturbances are computed dynamically at each time-step by exploiting the recurrent architecture of RNNs, making them more efficient than prior approaches and amenable to 'real-time' attacks. Finally, the theoretical results are supported by illustrative examples.