When:
Where:
Speaker: Kwang-Sung Jun, Computer Science, University of Arizona
Title: Confidence sequences via online learning
Abstract: Confidence sequences provide a way to characterize uncertainty in stochastic environments and are a widely used tool in interactive machine learning and statistics, with applications including A/B testing, Bayesian optimization, reinforcement learning, and offline evaluation/learning. In these problems, constructing confidence sequences that are tight without sacrificing correctness is crucial, since tightness has a dramatic impact on the performance of downstream tasks. In this talk, I will show how to leverage results from online learning to derive confidence sequences that are provably and numerically tight. First, I will present an implicitly defined confidence sequence for bounded random variables that induces an empirical Bernstein-style confidence bound (and thus adapts to the variance) and is simultaneously provably better than the KL divergence-based confidence bound, unlike the standard empirical Bernstein bound. This confidence bound is never vacuous, can be computed efficiently, and achieves state-of-the-art numerical performance. Second, I will turn to generalized linear models and show how leveraging online learning helps develop (i) improved confidence sets for logistic linear models and (ii) noise-adaptive confidence sets for linear models with sub-Gaussian and bounded noise, respectively. These results lead to improved regret bounds for (generalized) linear bandit problems. I will conclude with open problems in confidence sequences and the role online learning plays in addressing them.
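
Background sketch (not part of the speaker's abstract): the standard definition of a confidence sequence and a representative empirical Bernstein-style bound for observations in [0,1], with illustrative constants only; the speaker's specific construction and constants differ.

A sequence of sets (C_t)_{t >= 1}, each computed from the first t observations, is a confidence sequence for a mean \mu at level 1 - \delta if the coverage holds uniformly over time rather than at a single fixed sample size:
\[
  \Pr\bigl(\forall t \ge 1:\ \mu \in C_t\bigr) \ \ge\ 1 - \delta .
\]
An empirical Bernstein-style bound has the schematic shape
\[
  |\hat\mu_t - \mu| \ \le\ \sqrt{\frac{2\,\hat\sigma_t^2 \log(2/\delta)}{t}} \ +\ \frac{c \log(2/\delta)}{t},
\]
where \hat\mu_t and \hat\sigma_t^2 are the empirical mean and variance and c is a constant; the leading term adapts to the (unknown) variance, and the second term is a lower-order correction.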