New methods for Bayesian optimization of expensive functions
Bayesian optimization methods have been applied with great success to the global optimization of expensive-to-evaluate functions in machine learning, engineering, healthcare, and other areas. While traditional approaches query only the expensive objective itself, we often have access to other information sources: when optimizing an aerodynamic design, we may assess its performance through wind tunnel studies or through CFD simulations with varying mesh sizes; when optimizing an inventory management system, we may evaluate it in real life at the client’s warehouse or through discrete-event simulations that vary in length and number of replications. These approximations are typically subject not only to noise but also to an unknown bias. This so-called model discrepancy results from an inherent inability to model reality exactly, e.g., due to simplified physical models. In this talk I will present a rigorous mathematical treatment of the uncertainties arising from model discrepancy and noisy observations that allows us to leverage these information sources effectively. I propose novel knowledge gradient algorithms for choosing which information source to query at each point in time. These algorithms rely on a value of information analysis and maximize the predicted information gain per unit cost. Experimental results demonstrate that utilizing additional information sources improves performance significantly beyond what traditional methods can achieve. Time permitting, I will discuss recent work on solving sequences of related problems and on incorporating derivative information into Bayesian global optimization. Based on joint work with Peter I. Frazier, Jialei Wang, Andrew G. Wilson, and Jian Wu.
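The knowledge gradient algorithms themselves are not reproduced here, but the "predicted gain per unit cost" criterion can be illustrated with a toy sketch: a one-dimensional Gaussian process surrogate where each (information source, design) pair is scored by the posterior-variance reduction a query would produce, divided by that source's query cost. The source names, noise levels, costs, and kernel settings below are all invented for illustration and are not taken from the talk.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel with unit prior variance (toy choice)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def posterior_var(x_obs, x_cand, obs_noise=0.05, ls=0.3):
    """GP posterior variance at candidate points given noisy observations."""
    K = rbf(x_obs, x_obs, ls) + obs_noise * np.eye(len(x_obs))
    k = rbf(x_cand, x_obs, ls)
    return 1.0 - np.einsum("ij,ij->i", k @ np.linalg.inv(K), k)

# Hypothetical information sources: a cheap, noisy simulator and an
# accurate but expensive physical experiment (names and numbers invented).
sources = {
    "simulation":  {"noise": 0.25, "cost": 1.0},
    "wind_tunnel": {"noise": 0.01, "cost": 20.0},
}

x_obs = np.array([0.2, 0.8])            # designs already evaluated
x_cand = np.linspace(0.0, 1.0, 101)     # candidate designs
v = posterior_var(x_obs, x_cand)

# A query at x with observation noise sigma^2 shrinks the posterior
# variance v(x) to v*sigma^2/(v + sigma^2), a reduction of v^2/(v + sigma^2).
# Score each (source, design) pair by that reduction per unit cost.
best = max(
    ((name, x, (vx * vx / (vx + s["noise"])) / s["cost"])
     for name, s in sources.items()
     for x, vx in zip(x_cand, v)),
    key=lambda t: t[2],
)
print(f"query {best[0]} at x={best[1]:.2f} (gain/cost={best[2]:.3f})")
```

With these made-up costs the cheap simulator wins despite its higher noise, which is the qualitative behavior the cost-aware criterion is designed to capture; the actual knowledge gradient policies in the talk use a more sophisticated value of information analysis.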
Multi-Information Source Optimization, accepted as spotlight presentation for NIPS 2017
Warm-Starting Bayesian Optimization, 2016 Winter Simulation Conference
Bayesian Optimization with Gradients, accepted as oral presentation for NIPS 2017