Adaptive KL-UCB Based Bandit Algorithms for Markovian and I.I.D. Settings

Arghyadip Roy, Sanjay Shakkottai, R. Srikant

Research output: Contribution to journal › Article › peer-review


In the regret-based formulation of multi-armed bandit (MAB) problems, except in rare instances, much of the literature focuses on arms with independent and identically distributed (i.i.d.) rewards. In this article, we consider the problem of obtaining regret guarantees for MAB problems in which the rewards of each arm form a Markov chain that may not belong to a single-parameter exponential family. Achieving logarithmic regret in such problems is not difficult: a variation of the standard Kullback–Leibler upper confidence bound (KL-UCB) algorithm does the job. However, the constants obtained from such an analysis are poor, for the following reason: i.i.d. rewards are a special case of Markov rewards, and it is difficult to design an algorithm that works well regardless of whether the underlying model is truly Markovian or i.i.d. To overcome this issue, we introduce a novel algorithm that identifies whether the rewards from each arm are truly Markovian or i.i.d. using a total variation distance-based test. Our algorithm then switches from the standard KL-UCB to a specialized version of KL-UCB when it determines that the arm reward is Markovian, thus resulting in low regret for both i.i.d. and Markovian settings.
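The abstract describes two ingredients: a KL-UCB index (the largest plausible mean consistent with the observed samples, measured in KL divergence) and a total variation distance-based test for Markovian dependence. The sketch below is a generic illustration for binary (Bernoulli) rewards, not the paper's actual algorithm or thresholds: `kl_ucb_index` computes the standard Bernoulli KL-UCB bound by bisection, and `tv_markov_test` compares the two empirical transition-matrix rows in total variation distance; for a truly i.i.d. sequence both rows estimate the same marginal, so a large distance flags Markovian structure. The threshold value is a placeholder assumption.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    # KL divergence between Bernoulli(p) and Bernoulli(q), clipped for stability.
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t):
    # Standard KL-UCB index: the largest q >= mean such that
    # pulls * KL(mean, q) <= log(t), found by bisection.
    target = math.log(max(t, 2))
    lo, hi = mean, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if pulls * bernoulli_kl(mean, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo

def tv_markov_test(rewards, threshold=0.1):
    # Estimate the two rows of the transition matrix of a binary reward
    # sequence and compare them in total variation distance. If the rewards
    # are i.i.d., both rows equal the marginal distribution, so the TV
    # distance is near zero; a large distance suggests a Markov chain.
    # (threshold=0.1 is an illustrative choice, not from the paper.)
    counts = {0: [0, 0], 1: [0, 0]}
    for prev, nxt in zip(rewards, rewards[1:]):
        counts[prev][nxt] += 1
    rows = []
    for s in (0, 1):
        n = sum(counts[s])
        if n == 0:
            return False  # not enough data to declare the arm Markovian
        rows.append([c / n for c in counts[s]])
    tv = 0.5 * sum(abs(a - b) for a, b in zip(rows[0], rows[1]))
    return tv > threshold
```

For example, the alternating sequence `[0, 1, 0, 1, ...]` is strongly Markovian (its transition rows are `[0, 1]` and `[1, 0]`, at TV distance 1), while a sequence whose transition rows are nearly identical passes as effectively i.i.d., in which case an adaptive algorithm would keep using the standard KL-UCB index.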

Original language: English (US)
Pages (from-to): 2637-2644
Number of pages: 8
Journal: IEEE Transactions on Automatic Control
Issue number: 4
State: Published - Apr 2023
Externally published: Yes


Keywords

  • Kullback–Leibler upper confidence bound (KL-UCB)
  • multi-armed bandit (MAB)
  • online learning
  • regret
  • rested bandit

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Control and Systems Engineering
  • Computer Science Applications


