Abstract
In the regret-based formulation of multi-armed bandit (MAB) problems, much of the literature, with rare exceptions, focuses on arms with independent and identically distributed (i.i.d.) rewards. In this article, we consider the problem of obtaining regret guarantees for MAB problems in which the rewards of each arm form a Markov chain that may not belong to a single-parameter exponential family. Achieving logarithmic regret in such problems is not difficult: a variation of the standard Kullback–Leibler upper confidence bound (KL-UCB) algorithm does the job. However, the constants obtained from such an analysis are poor for the following reason: i.i.d. rewards are a special case of Markov rewards, and it is difficult to design an algorithm that works well regardless of whether the underlying model is truly Markovian or i.i.d. To overcome this issue, we introduce a novel algorithm that identifies whether the rewards from each arm are truly Markovian or i.i.d. using a total variation distance-based test. Our algorithm then switches from the standard KL-UCB to a specialized version of KL-UCB when it determines that the arm's rewards are Markovian, thus achieving low regret in both the i.i.d. and Markovian settings.
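To illustrate the idea behind the total variation distance-based test, the following is a minimal sketch, not the paper's exact procedure. It assumes rewards take values in a small finite set and flags an arm as Markovian when the empirical conditional distribution of the next reward, given the current one, deviates from the empirical marginal distribution by more than a chosen threshold; the function name, `threshold` parameter, and state encoding are all illustrative assumptions.

```python
import numpy as np

def tv_based_markov_test(rewards, num_states, threshold):
    """Hypothetical sketch: declare an arm Markovian when some empirical
    conditional next-reward distribution is far (in total variation)
    from the empirical marginal distribution of rewards."""
    rewards = np.asarray(rewards)
    # Empirical marginal distribution of the observed rewards.
    marginal = np.bincount(rewards, minlength=num_states) / len(rewards)

    # Empirical transition counts from consecutive reward pairs.
    counts = np.zeros((num_states, num_states))
    for prev, curr in zip(rewards[:-1], rewards[1:]):
        counts[prev, curr] += 1

    max_tv = 0.0
    for s in range(num_states):
        row_total = counts[s].sum()
        if row_total == 0:
            continue  # state s never visited; no evidence either way
        conditional = counts[s] / row_total
        # Total variation distance between P(. | s) and the marginal.
        tv = 0.5 * np.abs(conditional - marginal).sum()
        max_tv = max(max_tv, tv)

    # Under i.i.d. rewards every conditional matches the marginal,
    # so a large deviation suggests genuine Markovian dependence.
    return max_tv > threshold

# Example usage on a synthetic i.i.d. Bernoulli arm: the test should
# (with high probability) return False, keeping the standard KL-UCB index.
rng = np.random.default_rng(0)
iid_rewards = rng.integers(0, 2, size=5000)
print(tv_based_markov_test(iid_rewards, num_states=2, threshold=0.1))
```

In a bandit loop, such a test would be run per arm on its observed reward sequence, and the outcome would decide which KL-UCB index (standard or Markov-specialized) is used for that arm in subsequent rounds.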
| Original language | English (US) |
|---|---|
| Pages (from-to) | 2637-2644 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 69 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2023 |
| Externally published | Yes |
Keywords
- Kullback–Leibler upper confidence bound (KL-UCB)
- multi-armed bandit (MAB)
- online learning
- regret
- rested bandit
ASJC Scopus subject areas
- Electrical and Electronic Engineering
- Control and Systems Engineering
- Computer Science Applications