Graph-based Bayesian learning: continuum limits and algorithms
The principled learning of functions from data is at the core of statistics, machine learning, and artificial intelligence. The aim of this talk is to present new theoretical and methodological developments concerning the graph-based Bayesian approach to semi-supervised learning. I will show suitable scalings of graph parameters that provably lead to robust Bayesian solutions in the limit of a large number of unlabeled data points. The analysis relies on a careful choice of topology and on the study of the spectrum of graph Laplacians. Besides guaranteeing the consistency of graph-based methods, our theory explains the robustness of discretized function-space MCMC methods in semi-supervised learning settings. This is joint work with Nicolas Garcia Trillos, Zachary Kaplan, and Thabo Samakhoana.
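To fix ideas, the following is a minimal sketch of the graph-based setup the abstract refers to: a similarity graph is built over labeled and unlabeled points, the graph Laplacian L = D - W is formed, and labels are propagated by solving a linear system; this corresponds to the MAP estimate under a Gaussian prior with precision L, and is a standard illustration rather than the authors' specific method. The data, bandwidth, and label choices below are made up for the example.

```python
import numpy as np

# Toy data: two well-separated Gaussian clusters in the plane,
# with one labeled point per cluster (illustrative assumption).
rng = np.random.default_rng(0)
n = 40
X = np.vstack([rng.normal(-2.0, 0.5, (n // 2, 2)),
               rng.normal(+2.0, 0.5, (n // 2, 2))])

# Weighted similarity graph with a Gaussian kernel of bandwidth eps.
# The scaling of eps with n is exactly the kind of graph parameter
# whose continuum limit the talk analyzes.
eps = 1.0
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / (2 * eps ** 2))
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W

# Label the first and last points; recover the remaining values by
# minimizing u^T L u subject to the labels (the harmonic solution,
# i.e. the MAP estimate of a Gaussian model with precision L).
labeled = np.array([0, n - 1])
y = np.array([-1.0, +1.0])
unlabeled = np.setdiff1d(np.arange(n), labeled)

u = np.zeros(n)
u[labeled] = y
u[unlabeled] = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                               -L[np.ix_(unlabeled, labeled)] @ y)

print("cluster 1 mean:", u[:n // 2].mean())
print("cluster 2 mean:", u[n // 2:].mean())
```

Because the cross-cluster kernel weights are tiny relative to the within-cluster ones, the recovered values sit near -1 on the first cluster and near +1 on the second; the talk's theory concerns how such solutions behave as the number of unlabeled points grows and the graph parameters are rescaled.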