Abstract
In this paper, we study multi-armed bandit problems in an explore-then-commit setting, where the goal is to identify the best arm after a pure experimentation (exploration) phase and then exploit it once or a given finite number of times. We observe that although the arm with the highest expected reward is the right target for infinitely many exploitations, it is not necessarily the arm most likely to yield the highest reward in a single or finite number of exploitations. Instead, we advocate a risk-averse objective: competing against the arm with the best risk-return trade-off. We propose two algorithms that aim to select the arm most likely to yield the highest reward. Using a new notion of finite-time exploitation regret, we derive an upper bound of order ln(1/ϵ) on the minimum number of experiments before commitment that guarantees a regret of at most ϵ. Unlike existing risk-averse bandit algorithms, our algorithms do not rely on hyper-parameters, which results in more robust behavior; this is verified by numerical evaluations.
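To see why the highest-mean arm need not be the arm most likely to win a single exploitation, consider a toy example (not from the paper; the two reward distributions below are illustrative assumptions): a safe arm paying 1.0 deterministically, and a risky arm paying 10.0 with probability 0.2 and 0.0 otherwise, so its mean of 2.0 is higher. A minimal simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical arms (illustrative assumptions, not the paper's setup):
# Arm A: constant reward 1.0.
# Arm B: reward 10.0 with probability 0.2, else 0.0 (mean 2.0).
def pull(arm):
    if arm == "A":
        return 1.0
    return 10.0 if rng.random() < 0.2 else 0.0

# B has the higher expected reward, yet in a single head-to-head draw
# A beats B whenever B pays 0, i.e., with probability 0.8.
n = 100_000
a = np.array([pull("A") for _ in range(n)])
b = np.array([pull("B") for _ in range(n)])
print("empirical means:", a.mean(), b.mean())    # ~1.0 vs ~2.0
print("P(A >= B) in one draw:", np.mean(a >= b))  # ~0.8
```

For a single committed exploitation, a player maximizing the probability of receiving the larger reward would prefer arm A, even though arm B is optimal in expectation; this is the gap the risk-averse objective targets.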
| Original language | English (US) |
| --- | --- |
| Article number | 9142286 |
| Pages (from-to) | 8441-8446 |
| Number of pages | 6 |
| Journal | Proceedings of the IEEE Conference on Decision and Control |
| Volume | 2019-January |
| DOIs | |
| State | Published - 2019 |
| Event | 58th IEEE Conference on Decision and Control, CDC 2019 - Nice, France; Dec 11 2019 → Dec 13 2019 |
ASJC Scopus subject areas
- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization