Abstract
In this paper, we consider the problem of multi-armed bandits with a large number of correlated arms. We assume that the arms have Bernoulli-distributed rewards, independent across time, whose success probabilities are parametrized by a known attribute vector for each arm together with an unknown preference vector common to all arms, each of dimension n. For this model, we seek an algorithm whose total regret is sub-linear in time and independent of the number of arms. We present such an algorithm, which we call the Three-phase Algorithm, and analyze its performance. We show an upper bound on the total regret that holds uniformly in time. The asymptotics of this bound show that for any f ∈ ω(log(T)), the total regret can be made O(n·f(T)), independent of the number of arms.
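As a rough illustration of the reward model (not taken from the paper), the sketch below simulates a correlated-arm Bernoulli bandit assuming a linear parametrization p_i = ⟨a_i, θ⟩; the abstract does not fix the functional form, and the normalization of the attribute and preference vectors used here is purely to keep probabilities in [0, 1].

```python
import numpy as np

# Minimal sketch of the correlated-arm Bernoulli bandit model described above.
# Assumption (not specified in the abstract): success probabilities are linear
# in the unknown preference vector, p_i = <a_i, theta>.

rng = np.random.default_rng(0)

n = 5            # dimension of attribute / preference vectors
num_arms = 1000  # number of (correlated) arms, possibly much larger than n

# Known attribute vectors, one per arm; rows normalized to sum to 1 so that
# <a_i, theta> is a convex combination of theta's entries and stays in [0, 1].
A = rng.random((num_arms, n))
A /= A.sum(axis=1, keepdims=True)

theta = rng.random(n)   # unknown preference vector (hidden from the learner)
p = A @ theta           # success probability of each arm

def pull(arm: int) -> int:
    """Draw a Bernoulli reward for the chosen arm, independent across time."""
    return int(rng.random() < p[arm])

# Realized regret of repeatedly pulling one fixed arm, relative to the best arm.
T = 10_000
best = p.max()
arm = 0
regret = T * best - sum(pull(arm) for _ in range(T))
print(f"per-step regret gap of arm {arm}: {best - p[arm]:.3f}, realized total regret: {regret:.1f}")
```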
Original language | English (US)
---|---
Title of host publication | Proceedings of the 2011 American Control Conference, ACC 2011
Pages | 119-124
Number of pages | 6
State | Published - 2011
Event | 2011 American Control Conference, ACC 2011 - San Francisco, CA, United States
Duration | Jun 29 2011 → Jul 1 2011
Other

Other | 2011 American Control Conference, ACC 2011
---|---
Country/Territory | United States
City | San Francisco, CA
Period | 6/29/11 → 7/1/11
ASJC Scopus subject areas
- Electrical and Electronic Engineering