This video is #8 in the Adaptive Experimentation series presented at the 18th IEEE Conference on eScience in Salt Lake City, UT (October 10-14, 2022). In this video, Sterling Baird @sterling-baird presents continuous multi-fidelity optimization, a method for optimizing a complex system using multiple models of varying fidelity. The goal is to find the optimal set of input parameters while balancing the trade-off between the accuracy and the computational cost of the models. A key advantage of continuous multi-fidelity optimization is that it can significantly reduce the computational cost of optimization by relying on lower-fidelity models in regions of the input space where the models' outputs differ little, allowing the optimization to explore more of the input space in less time.
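For readers skimming before watching, here is a minimal sketch of how a multi-fidelity surrogate might be fit in BoTorch. The toy 2-parameter problem, the synthetic objective, and the fidelity column index are illustrative assumptions, not the notebook's actual setup.

import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)

# Toy training data: 2 design parameters plus a continuous fidelity
# parameter s in [0, 1] stored in the last column (index 2).
train_X = torch.rand(20, 3, dtype=torch.double)
s = train_X[:, 2]
# Hypothetical objective whose observation noise shrinks as s -> 1.
f = torch.sin(6.0 * train_X[:, 0]) + torch.cos(4.0 * train_X[:, 1])
train_Y = (f + 0.5 * (1.0 - s) * torch.randn(20, dtype=torch.double)).unsqueeze(-1)

# Flag the fidelity column so the kernel models correlation across
# fidelities (older BoTorch releases spell this `data_fidelity=2`).
model = SingleTaskMultiFidelityGP(train_X, train_Y, data_fidelities=[2])
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)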
GitHub link to Jupyter notebook: [ Link ]
previous video in series: [ Link ]
next video in series: [ Link ]
0:00 intro and demo details
2:10 parameters, Service API, knowledge gradient model
4:06 BoTorch with knowledge gradient
5:23 problem setup
8:04 helper function for the acquisition function (see the sketch after this chapter list)
12:49 Bayesian optimization and final recommendation
15:30 final results, comparison to expected improvement
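A correspondingly hedged sketch of the cost-aware knowledge-gradient step the chapters walk through, continuing from the fitted `model` in the earlier sketch. The cost weights, fantasy count, and optimizer settings are illustrative assumptions and may differ from the notebook's values.

import torch
from botorch.acquisition import PosteriorMean
from botorch.acquisition.cost_aware import InverseCostWeightedUtility
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.acquisition.knowledge_gradient import qMultiFidelityKnowledgeGradient
from botorch.acquisition.utils import project_to_target_fidelity
from botorch.models.cost import AffineFidelityCostModel
from botorch.optim.optimize import optimize_acqf

bounds = torch.tensor([[0.0] * 3, [1.0] * 3], dtype=torch.double)
target_fidelities = {2: 1.0}  # value candidates as if run at full fidelity

# Evaluation cost grows linearly with fidelity; the fixed cost keeps
# low-fidelity runs from looking free.
cost_model = AffineFidelityCostModel(fidelity_weights={2: 1.0}, fixed_cost=5.0)
cost_aware_utility = InverseCostWeightedUtility(cost_model=cost_model)

def project(X):
    # Project candidates to the target fidelity when valuing them.
    return project_to_target_fidelity(X=X, target_fidelities=target_fidelities)

# Baseline value: best posterior mean with fidelity pinned to 1.0,
# optimized over the two design dimensions only.
curr_val_acqf = FixedFeatureAcquisitionFunction(
    acq_function=PosteriorMean(model),  # `model` from the earlier sketch
    d=3, columns=[2], values=[1.0],
)
_, current_value = optimize_acqf(
    acq_function=curr_val_acqf, bounds=bounds[:, :-1],
    q=1, num_restarts=5, raw_samples=128,
)

mfkg = qMultiFidelityKnowledgeGradient(
    model=model,
    num_fantasies=64,
    current_value=current_value,
    cost_aware_utility=cost_aware_utility,
    project=project,
)

# One-shot KG optimization over design and fidelity dimensions jointly.
candidate, _ = optimize_acqf(
    acq_function=mfkg, bounds=bounds,
    q=1, num_restarts=5, raw_samples=128,
)

For the final recommendation step discussed around 12:49, one would typically re-optimize the posterior mean at the target fidelity, much like the baseline computation above.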