Maximin Action Identification: A New Bandit Framework for Games

Edition Number: 29
Date: June 2016
Place: New York, USA
PageStart: 1028
PageEnd: 1050
Abstract: 

We study an original pure-exploration problem in a strategic bandit model motivated by Monte Carlo Tree Search. The goal is to identify the best action in a game when the player may sample random outcomes of sequentially chosen pairs of actions. We propose two strategies for the fixed-confidence setting: Maximin-LUCB, based on lower- and upper-confidence bounds, and Maximin-Racing, which operates by successively eliminating sub-optimal actions. We discuss the sample complexity of both methods and compare their performance empirically. We also sketch a lower-bound analysis and possible connections to an optimal algorithm.
Emilie's presentation at COLT on YouTube
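The fixed-confidence maximin identification task described in the abstract can be illustrated with a small simulation. The sketch below is a generic Hoeffding-based, LUCB-style loop written for this summary, not the paper's Maximin-LUCB algorithm: the confidence radius, sampling rule, and stopping rule are illustrative assumptions. The player samples Bernoulli outcomes of action pairs (i, j) and tries to identify argmax_i min_j mu(i, j).

```python
import math
import random

def maximin_lucb(mu, delta=0.1, max_samples=200_000, seed=0):
    """Hedged sketch of LUCB-style maximin action identification.

    mu[i][j] is the true mean outcome when the player picks action i and
    the opponent picks action j (Bernoulli rewards).  Returns the row
    index believed to maximize min_j mu[i][j].  Assumes at least two
    candidate actions.  This is an illustrative variant with plain
    Hoeffding confidence bounds, not the authors' exact procedure.
    """
    rng = random.Random(seed)
    nA, nB = len(mu), len(mu[0])
    counts = [[0] * nB for _ in range(nA)]
    sums = [[0.0] * nB for _ in range(nA)]

    def sample(i, j):
        counts[i][j] += 1
        sums[i][j] += 1.0 if rng.random() < mu[i][j] else 0.0

    # Initialise with one draw per pair so all bounds are defined.
    for i in range(nA):
        for j in range(nB):
            sample(i, j)

    def bounds(i, j, t):
        # Anytime Hoeffding lower/upper confidence bounds (illustrative).
        n = counts[i][j]
        mean = sums[i][j] / n
        rad = math.sqrt(math.log(4 * nA * nB * t * t / delta) / (2 * n))
        return mean - rad, mean + rad

    t = nA * nB
    while t < max_samples:
        t += 1
        # Lower/upper bounds on each row's maximin value.
        lrow = [min(bounds(i, j, t)[0] for j in range(nB)) for i in range(nA)]
        urow = [min(bounds(i, j, t)[1] for j in range(nB)) for i in range(nA)]
        best = max(range(nA), key=lambda i: lrow[i])
        # Challenger: highest upper maximin bound among the other rows.
        chall = max((i for i in range(nA) if i != best), key=lambda i: urow[i])
        if lrow[best] >= urow[chall]:
            return best  # best row provably maximin at level 1 - delta
        # Sample the most pessimistic column of each contender.
        for i in (best, chall):
            j = min(range(nB), key=lambda c: bounds(i, c, t)[0])
            sample(i, j)
    # Budget exhausted: fall back to the empirical maximin action.
    return max(range(nA),
               key=lambda i: min(sums[i][j] / counts[i][j] for j in range(nB)))
```

For example, with mu = [[0.9, 0.3], [0.5, 0.45]], the maximin values are 0.3 for row 0 and 0.45 for row 1, so the procedure should return index 1.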

Arxiv Number: 1602.04676
Hal Number: 01273842