InterQ: A DQN Framework for Optimal Intermittent Control

Research output: Contribution to journal › Article › peer-review

Abstract

In this letter, we explore the communication-control co-design of discrete-time stochastic linear systems through reinforcement learning. Specifically, we examine a closed-loop system involving two sequential decision-makers: a scheduler and a controller. The scheduler continuously monitors the system's state but transmits it to the controller only intermittently, balancing communication cost against control performance. The controller, in turn, determines the control input based on the intermittently received information. Given the partially nested information structure, we show that the optimal control policy takes a certainty-equivalence form. We then analyze the qualitative behavior of the scheduling policy. To determine the optimal scheduling policy, we propose InterQ, a deep reinforcement learning algorithm that uses a deep neural network to approximate the associated Q-function. Through extensive numerical evaluations, we analyze the scheduling landscape and compare our approach against two baseline strategies: (a) a multi-period periodic scheduling policy, and (b) an event-triggered policy. The results demonstrate that our proposed method outperforms both baselines.

Original language: English (US)
Pages (from-to): 607-612
Number of pages: 6
Journal: IEEE Control Systems Letters
Volume: 9
DOIs
State: Published - 2025

Keywords

  • Intermittent control
  • deep Q-networks
  • deep reinforcement learning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Control and Optimization
