Gaussian processes and Bayesian optimization
In machine learning, most work is spent dealing with increasingly large piles of data: this includes both the theoretical design of models that can work well with large amounts of data, and the machinery that allows large amounts of data to flow through a model development process. Today, however, we will focus on models that seek to operate with as little data as possible---so-called sample-efficient models. In particular, we will discuss Gaussian processes and explore some of their exciting theoretical properties and connections to various other topics in mathematics and statistics. After setting this foundation, we will push on to what is potentially their most important application: Bayesian optimization. Bayesian optimization provides a sample-efficient strategy for identifying high-performing outcomes of a given function. At various points throughout this talk, we will also discuss how enforcing certain physics-motivated beliefs can make these tools even more efficient.
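To make the idea concrete, here is a minimal sketch of the Bayesian optimization loop the abstract describes: fit a Gaussian process to the points evaluated so far, then use an acquisition function (expected improvement here) to pick the next point to evaluate. Everything below is illustrative, not the speaker's implementation; the toy objective, kernel length scale, and grid-based candidate search are all assumptions made for this example.

```python
import math
import numpy as np

def rbf(a, b, length_scale=0.2):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at the test points."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(x_te, x_te).diagonal() - np.sum(v * v, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

_erf = np.vectorize(math.erf)  # elementwise erf without scipy

def expected_improvement(mu, sigma, best_y):
    """EI acquisition for minimization: expected gain below best_y."""
    z = (best_y - mu) / sigma
    cdf = 0.5 * (1.0 + _erf(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return sigma * (z * cdf + pdf)

def bayes_opt(f, grid, n_init=3, n_iter=10, seed=0):
    """Sample-efficient minimization of f over a fixed candidate grid."""
    rng = np.random.default_rng(seed)
    x = rng.choice(grid, size=n_init, replace=False)
    y = f(x)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        x = np.append(x, x_next)
        y = np.append(y, f(x_next))
    return x[np.argmin(y)], y.min()

# Hypothetical toy objective with its minimum at x = 0.3.
f = lambda x: (x - 0.3) ** 2
grid = np.linspace(0.0, 1.0, 101)
x_best, y_best = bayes_opt(f, grid)
```

With only 13 function evaluations the loop homes in on the minimizer, which is the sample-efficiency argument of the talk in miniature; the physics-motivated beliefs mentioned above would enter through the choice of kernel and mean function.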
Bio: Mike currently manages the SigOpt ML platform, which efficiently identifies optimal modeling designs and configurations for customers in AI/ML, finance, and materials science, among other fields. He has been with SigOpt for seven years, nearly since its founding as a VC-funded startup in San Francisco. He started on the research team before moving to management after SigOpt was acquired by Intel in late 2020. Prior to joining SigOpt, Mike was a postdoc at Argonne and CU-Denver, where he co-wrote a text on kernel approximation methods. He has a PhD in applied mathematics from Cornell University.