Closing conference

The closing conference of the SPADRO project will take place

from October 30 to November 3, 2017

at the Centre Lazaret, in Sète.

The participants will be:

Faster rates for policy learning

This article improves the existing proven rates of regret decay in optimal policy estimation. We give a margin-free result showing that the regret decay for estimating a within-class optimal policy is second-order for empirical risk minimizers over Donsker classes, with regret decaying at a faster rate than the standard error of an efficient estimator of the value of an optimal policy. We also give a result from the classification literature that shows that faster regret decay is possible via plug-in estimation provided a margin condition holds. Four examples are considered.
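To fix ideas, the standard objects behind these statements can be written as follows (the notation is ours, not the article's): with V(d) the value of a policy d, i.e., the mean outcome obtained when actions are assigned according to d,

\[
d_0 \in \operatorname*{argmax}_{d \in \mathcal{D}} V(d),
\qquad
R(d_n) = V(d_0) - V(d_n),
\]

where \mathcal{D} is the policy class and R(d_n) is the regret of an estimated policy d_n. In this vocabulary, the margin-free result above can be read as saying that R(d_n) = o_P(n^{-1/2}) for empirical risk minimizers over Donsker classes, i.e., the regret is asymptotically negligible relative to the n^{-1/2} standard error of an efficient estimator of V(d_0).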

On the estimation of the mean of a random vector

We study the problem of estimating the mean of a multivariate distribution based on independent samples. The main result is the proof of existence of an estimator with a non-asymptotic sub-Gaussian performance for all distributions satisfying some mild moment assumptions.
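To illustrate the flavor of such constructions, here is a minimal sketch of the classical median-of-means idea, a standard route to sub-Gaussian-type deviations under weak moment assumptions. It is not the estimator studied in the paper, whose multivariate construction is more delicate (the coordinate-wise median below does not by itself yield the full multivariate guarantee), and all names and parameters are ours.

import numpy as np

def median_of_means(x, n_blocks, rng):
    """Coordinate-wise median-of-means estimate of the mean of x.

    x: array of shape (n, d) of i.i.d. samples.
    n_blocks: number of blocks; more blocks buy robustness
    at the price of accuracy.
    """
    n = x.shape[0]
    # Split the sample into n_blocks disjoint blocks of (nearly) equal size.
    blocks = np.array_split(rng.permutation(n), n_blocks)
    # Average within each block, then take the coordinate-wise median.
    block_means = np.stack([x[b].mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)

# Toy example: heavy-tailed samples where the empirical mean is unreliable.
rng = np.random.default_rng(0)
x = rng.standard_t(df=2.5, size=(10_000, 3))
print(median_of_means(x, n_blocks=20, rng=rng))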

A Minkowski Theorem for Quasicrystals

The aim of this paper is to generalize Minkowski’s theorem. This theorem is usually stated for a centrally symmetric convex body and a lattice both included in R^n. In some situations, one may replace the lattice by a more general set for which a notion of density exists. In this paper, we prove a Minkowski theorem for quasicrystals, which bounds from below the frequency of differences appearing in the quasicrystal and belonging to a centrally symmetric convex body.
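For reference, the classical statement being generalized reads as follows: if Λ ⊂ R^n is a full-rank lattice and C ⊂ R^n is a centrally symmetric convex body, then

\[
\operatorname{vol}(C) > 2^n \det(\Lambda)
\;\Longrightarrow\;
C \cap (\Lambda \setminus \{0\}) \neq \emptyset .
\]

The generalization replaces the conclusion "C contains a nonzero lattice point" by a lower bound on the frequency of differences of points of the quasicrystal that fall in C.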

Targeted sequential design for targeted learning inference of the optimal treatment rule and its mean reward

This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own.
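In symbols (notation ours, not necessarily the article's): writing W for the baseline covariates, A ∈ {0, 1} for the treatment, and Y for the reward, the optimal TR and its mean reward are

\[
r_0(W) \in \operatorname*{argmax}_{a \in \{0,1\}} E[Y \mid A = a, W],
\qquad
\psi_0 = E\big[\, E[Y \mid A = r_0(W), W] \,\big],
\]

and the non-exceptionality and margin assumptions control the behavior of the blip function E[Y | A = 1, W] − E[Y | A = 0, W] near zero. The data-adaptive parameter mentioned above is obtained by substituting the current estimate r_n for r_0 in the definition of ψ_0.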

SPADRO extended by six additional months

Dear colleagues,

We have excellent news to share with you as 2016 draws to a close. The Agence Nationale de la Recherche has approved a six-month extension of SPADRO. The end of the project is therefore pushed back to December 31, 2017, a little over a year from now. This is a great opportunity for us to push our explorations further!

Wishing you happy holidays, admittedly a little early, with best regards,

Antoine and Aurélien

Refined Lower Bounds for Adversarial Bandits

We provide new lower bounds on the regret that must be suffered by adversarial bandit algorithms. The new results show that recent upper bounds that either (a) hold with high probability, (b) depend on the total loss of the best arm, or (c) depend on the quadratic variation of the losses, are close to tight. Besides this, we prove two impossibility results. First, the existence of a single arm that is optimal in every round cannot improve the regret in the worst case. Second, the regret cannot scale with the effective range of the losses.

Conditional quantile sequential estimation for stochastic codes

This paper is devoted to the estimation of a conditional quantile, more precisely the quantile of the output of a real-valued stochastic code whose inputs lie in R^d. For this purpose, we introduce a stochastic algorithm based on the Robbins-Monro algorithm and on k-nearest neighbors theory. We give conditions on the code under which the algorithm converges and study the non-asymptotic rate of convergence of the mean squared error. Finally, we provide the optimal parameters of the algorithm that achieve the best rate of convergence.
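As a rough sketch of the kind of procedure described (our own minimal rendering under simplifying assumptions, not the authors' exact algorithm or calibration), one can run the Robbins-Monro recursion for a quantile using only the outputs whose inputs fall among the k nearest neighbors of the query point:

import numpy as np

def knn_rm_quantile(X, Y, x, alpha, k, gamma0=1.0, a=0.66):
    """Robbins-Monro estimate of the alpha-quantile of Y given X = x,
    restricted to the k nearest neighbors of x.

    X: (n, d) inputs of the stochastic code; Y: (n,) outputs.
    gamma0, a: step-size parameters, gamma_t = gamma0 / t**a
    (our choice, not the authors' calibration).
    """
    dists = np.linalg.norm(X - x, axis=1)
    neighbors = np.argsort(dists)[:k]
    theta = 0.0  # initial quantile estimate
    for t, i in enumerate(neighbors, start=1):
        # Robbins-Monro step for the pinball-loss gradient: move the
        # estimate up when Y[i] exceeds it, down otherwise.
        theta += (gamma0 / t**a) * (alpha - (Y[i] <= theta))
    return theta

# Toy stochastic code: output = sum of the inputs plus Gaussian noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(5000, 2))
Y = X.sum(axis=1) + rng.normal(scale=0.5, size=5000)
print(knn_rm_quantile(X, Y, x=np.zeros(2), alpha=0.9, k=400))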

On Explore-Then-Commit Strategies

We study the problem of minimising regret in two-armed bandit problems with Gaussian rewards. Our objective is to use this simple setting to illustrate that strategies based on an exploration phase (up to a stopping time) followed by exploitation are necessarily suboptimal. The results hold regardless of whether or not the difference in means between the two arms is known.
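For concreteness, here is a minimal sketch (ours, with an arbitrary fixed exploration length rather than a stopping time) of an explore-then-commit strategy in this two-armed unit-variance Gaussian setting, computing its pseudo-regret:

import numpy as np

def etc_pseudo_regret(means, horizon, explore_rounds, rng):
    """Pull each arm explore_rounds times, then commit to the
    empirically best arm for the remaining rounds."""
    m0 = rng.normal(means[0], 1.0, explore_rounds).mean()
    m1 = rng.normal(means[1], 1.0, explore_rounds).mean()
    committed = int(m1 > m0)
    best = int(means[1] > means[0])
    gap = abs(means[1] - means[0])
    # The suboptimal arm is pulled explore_rounds times during exploration,
    # plus the whole remaining horizon if the commitment is wrong.
    n_suboptimal = explore_rounds + (
        0 if committed == best else horizon - 2 * explore_rounds)
    return gap * n_suboptimal

rng = np.random.default_rng(0)
regrets = [etc_pseudo_regret((0.0, 0.1), 100_000, 2_000, rng)
           for _ in range(200)]
print(np.mean(regrets))

Whatever the choice of exploration length (or, more generally, of the stopping time ending the exploration phase), the paper shows that such strategies remain suboptimal compared with fully sequential ones.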

Explore First, Exploit Next: The True Shape of Regret in Bandit Problems

We revisit lower bounds on the regret in the case of multi-armed bandit problems. We obtain non-asymptotic, distribution-dependent bounds and provide straightforward proofs based only on well-known properties of Kullback-Leibler divergences. These bounds show in particular that in an initial phase the regret grows almost linearly, and that the well-known logarithmic growth of the regret only holds in a final phase. The proof techniques get to the essence of the information-theoretic arguments used and are stripped of all unnecessary complications.
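To give the flavor of these arguments (our paraphrase, notation ours), the central tool is an information-theoretic inequality of the following type: for two bandit models ν and ν′ over K arms and any random variable Z ∈ [0, 1] measurable with respect to the observations up to time T,

\[
\sum_{k=1}^{K} E_{\nu}\!\big[N_k(T)\big]\, \mathrm{KL}(\nu_k, \nu'_k)
\;\geq\;
\mathrm{kl}\big(E_{\nu}[Z],\, E_{\nu'}[Z]\big),
\]

where N_k(T) is the number of pulls of arm k up to time T and kl denotes the Kullback-Leibler divergence between Bernoulli distributions. Choosing Z suitably (for instance, a normalized number of pulls of a given arm) turns this inequality into non-asymptotic, distribution-dependent lower bounds of the kind discussed above.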
