On the Complexity of Best Arm Identification in Multi-Armed Bandit Models

Date: 
January, 2016
PageStart: 
1
PageEnd: 
42
Arxiv Number: 
1407.4443
Hal Number: 
01024894
Abstract: 
The stochastic multi-armed bandit model is a simple abstraction that has proven useful in many different contexts in statistics and machine learning. Whereas the achievable limit in terms of regret minimization is now well known, our aim is to contribute to a better understanding of the performance in terms of identifying the m best arms. We introduce generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings. In the fixed-confidence setting, we provide the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m>=1 under general assumptions. In the specific case of two-armed bandits, we derive refined lower bounds in both the fixed-confidence and fixed-budget settings, along with matching algorithms for Gaussian and Bernoulli bandit models. These results show in particular that the complexity of the fixed-budget setting may be smaller than the complexity of the fixed-confidence setting, contradicting the familiar behavior observed when testing fully specified alternatives. In addition, we also provide improved sequential stopping rules that have guaranteed error probabilities and shorter average running times. The proofs rely on two technical results that are of independent interest: a deviation lemma for self-normalized sums (Lemma 19) and a novel change of measure inequality for bandit models (Lemma 1).
Issue: 
1
Volume: 
17