Adaptive multi-fidelity optimization with fast learning rates
2026-04-17 • Machine Learning
AI summary
The authors study how to best use different versions of a function that vary in cost and accuracy to find the function's minimum when resources are limited. They first show theoretical limits on how well any method can do based on the tradeoff between cost and bias in these approximations. Then, they propose a new method called Kometo that matches these limits up to small extra factors and does not require knowing details about the function or approximations. Finally, they test their method and find it works better than previous approaches without needing problem-specific information.
Keywords: multi-fidelity optimization, simple regret, cost-to-bias tradeoff, function smoothness, biased approximation, Kometo algorithm, optimization budget, theoretical lower bounds, logarithmic factors
Authors
Côme Fiegel, Victor Gabillon, Michal Valko
Abstract
In multi-fidelity optimization, biased approximations of the target function are available at varying costs. This paper studies the problem of optimizing a locally smooth function with a limited budget, where the learner must trade off the cost and the bias of these approximations. We first prove lower bounds on the simple regret under different assumptions on the fidelities, expressed through a cost-to-bias function. We then present the Kometo algorithm, which achieves the same rates up to additional logarithmic factors without any knowledge of the function smoothness or the fidelity assumptions, improving on previously proven guarantees. We finally show empirically that our algorithm outperforms previous multi-fidelity optimization methods without requiring knowledge of problem-dependent parameters.
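To make the setting concrete, here is a minimal toy sketch (not the paper's Kometo algorithm) of the cost-to-bias tradeoff the abstract describes: each fidelity returns the target function plus a bounded bias, cheaper fidelities being more biased, and the learner must spend a fixed budget across them. The function `f`, the fidelity table, and the coarse-to-fine screening strategy are all illustrative assumptions, not taken from the paper.

```python
import random

def f(x):
    """True (hidden) objective; minimum at x = 0.3."""
    return (x - 0.3) ** 2

# Hypothetical cost-to-bias tradeoff: each fidelity has a per-query cost
# and a bound on how far its value can deviate from f(x).
FIDELITIES = [
    {"cost": 1.0, "bias": 0.20},   # cheap, coarse approximation
    {"cost": 5.0, "bias": 0.05},   # mid fidelity
    {"cost": 25.0, "bias": 0.01},  # expensive, near-exact
]

def evaluate(x, m, rng):
    """Query fidelity m at point x; returns (biased value, cost paid)."""
    fid = FIDELITIES[m]
    return f(x) + rng.uniform(-fid["bias"], fid["bias"]), fid["cost"]

def budgeted_search(budget, rng):
    """Spend the budget coarse-to-fine: screen many points cheaply,
    then re-evaluate the survivors at higher fidelities."""
    candidates = [rng.random() for _ in range(64)]
    spent = 0.0
    for m in range(len(FIDELITIES)):
        scored = []
        for x in candidates:
            if spent + FIDELITIES[m]["cost"] > budget:
                break
            val, cost = evaluate(x, m, rng)
            spent += cost
            scored.append((val, x))
        if not scored:
            break
        scored.sort()
        # keep the best quarter for the next, more expensive fidelity
        candidates = [x for _, x in scored[: max(1, len(scored) // 4)]]
    return candidates[0], spent

rng = random.Random(0)
best, spent = budgeted_search(budget=300.0, rng=rng)
print(best, spent)
```

The point of the sketch is the tradeoff itself: a budget of 300 buys either 12 high-fidelity queries or 300 coarse ones, and the screening schedule interpolates between them. Kometo, by contrast, achieves near-optimal simple regret without being told the bias bounds or the smoothness of `f`.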