Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits Under Realizability

David Simchi-Levi, Yunzong Xu

Research output: Contribution to journal › Article › peer-review

Abstract

We consider the general (stochastic) contextual bandit problem under the realizability assumption, that is, the expected reward, as a function of contexts and actions, belongs to a general function class F. We design a fast and simple algorithm that achieves the statistically optimal regret with only O(log T) calls to an offline regression oracle across all T rounds. The number of oracle calls can be further reduced to O(log log T) if T is known in advance. Our results provide the first universal and optimal reduction from contextual bandits to offline regression, solving an important open problem in the contextual bandit literature. A direct consequence of our results is that any advances in offline regression immediately translate to contextual bandits, statistically and computationally. This leads to faster algorithms and improved regret guarantees for broader classes of contextual bandit problems.
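The algorithm proposed in this paper (known as FALCON) selects actions by inverse-gap weighting: at each round, a regression estimate from the offline oracle predicts a reward for every action, and non-greedy actions are played with probability inversely proportional to their predicted gap from the best action, scaled by a learning-rate parameter. The following is a minimal illustrative sketch of that action-selection rule only, not the full epoch-based algorithm; the function name and the fixed `gamma` value are assumptions for illustration.

```python
import numpy as np

def igw_action_probs(predicted_rewards, gamma):
    """Inverse-gap-weighting distribution over K actions (illustrative sketch).

    Each non-greedy action a is played with probability
        1 / (K + gamma * (max_reward - reward_a)),
    and the greedy action receives all remaining probability mass.
    Larger gamma concentrates play on the greedy action (more exploitation).
    """
    y = np.asarray(predicted_rewards, dtype=float)
    K = len(y)
    best = int(np.argmax(y))  # greedy action under the regression estimate
    probs = 1.0 / (K + gamma * (y[best] - y))
    probs[best] = 0.0
    probs[best] = 1.0 - probs.sum()  # remaining mass goes to the greedy action
    return probs

# Example: three actions with predicted rewards from the offline oracle.
probs = igw_action_probs([0.9, 0.5, 0.2], gamma=10.0)
action = np.random.choice(len(probs), p=probs)  # sample the action to play
```

In the paper, the learning rate grows across epochs as the regression estimates improve, which drives the probability mass toward the greedy action over time.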

Original language: English (US)
Pages (from-to): 1904-1931
Number of pages: 28
Journal: Mathematics of Operations Research
Volume: 47
Issue number: 3
State: Published - Aug 2022
Externally published: Yes

Keywords

  • computational efficiency
  • contextual bandit
  • offline regression
  • online-to-offline reduction
  • statistical learning

ASJC Scopus subject areas

  • General Mathematics
  • Computer Science Applications
  • Management Science and Operations Research
