Adaptive deep reinforcement learning for non-stationary environments

Jin Zhu, Yutong Wei, Yu Kang, Xiaofeng Jiang, Geir E. Dullerud

Research output: Contribution to journal › Article › peer-review

Abstract

Deep reinforcement learning (DRL) is currently used to solve Markov decision process problems in which the environment is typically assumed to be stationary. In this paper, we propose an adaptive DRL method for non-stationary environments. First, we introduce model uncertainty and propose a self-adjusting deep Q-learning algorithm that automatically rebalances exploration and exploitation as the environment changes. Second, we propose a feasible criterion for judging whether the parameter settings of deep Q-networks are appropriate, and we minimize the misjudgment probability based on the large deviation principle (LDP). The effectiveness of the proposed adaptive DRL method is demonstrated on an advanced persistent threat (APT) attack simulation game. Experimental results show that, compared with classic deep Q-learning algorithms in non-stationary and stationary environments, the adaptive DRL method improves performance by at least 14.28% and 30.56%, respectively.
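The abstract does not detail the self-adjusting mechanism, so the following is only a minimal sketch of one plausible way to rebalance exploration and exploitation when a non-stationary change is suspected: an epsilon-greedy exploration rate is re-opened whenever recent TD-error magnitudes drift well above their long-run level. All names and thresholds (ExplorationController, drift_threshold, etc.) are hypothetical and are not taken from the paper.

```python
# Hedged, illustrative sketch only; NOT the paper's self-adjusting deep Q-learning
# algorithm. It shows one way exploration can be re-opened when the environment
# appears to change, by monitoring a running TD-error statistic.
import numpy as np


class ExplorationController:
    """Adapts epsilon from a running estimate of TD-error magnitude."""

    def __init__(self, eps_min=0.05, eps_max=1.0, decay=0.995,
                 window=200, drift_threshold=2.0):
        self.eps = eps_max
        self.eps_min, self.eps_max, self.decay = eps_min, eps_max, decay
        self.window = window                  # length of the recent-error buffer
        self.drift_threshold = drift_threshold
        self.errors = []                      # recent |TD errors|
        self.baseline = None                  # slowly tracked long-run |TD error|

    def update(self, td_error):
        """Feed one TD error per training step; returns the current epsilon."""
        self.errors.append(abs(td_error))
        if len(self.errors) > self.window:
            self.errors.pop(0)
        recent = float(np.mean(self.errors))
        if self.baseline is None:
            self.baseline = recent
        # Track the long-run error level slowly so a genuine change stands out.
        self.baseline = 0.999 * self.baseline + 0.001 * recent
        if recent > self.drift_threshold * self.baseline:
            # Errors spiked relative to the baseline: assume the environment
            # changed and re-open exploration.
            self.eps = self.eps_max
        else:
            # Otherwise anneal toward exploitation as usual.
            self.eps = max(self.eps_min, self.eps * self.decay)
        return self.eps


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ctrl = ExplorationController()
    # Simulated |TD errors|: small noise, then a jump at step 500 mimicking a
    # non-stationary change in the reward or transition dynamics.
    for t in range(1000):
        err = rng.normal(0.1, 0.02) if t < 500 else rng.normal(1.0, 0.2)
        eps = ctrl.update(err)
        if t % 100 == 0:
            print(f"step {t:4d}  epsilon {eps:.3f}")
```

In a DQN training loop, `update` would be called with each minibatch's mean TD error, and the returned epsilon would drive the epsilon-greedy action selection; the paper's actual method additionally uses model uncertainty and an LDP-based criterion for the network's parameter settings, which this sketch does not cover.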

Original language: English (US)
Article number: 202204
Journal: Science China Information Sciences
Volume: 65
Issue number: 10
DOIs
State: Published - Oct 2022

Keywords

  • LDP
  • adaptive DRL
  • exploration and exploitation problem
  • model uncertainty
  • non-stationary environment
  • parameter setting

ASJC Scopus subject areas

  • General Computer Science
