Hi! I am trying to compare some optimizers and wanted to replicate your optimizer evaluation protocol and reproduce the ranks over the percentage of budget shown in your paper. When I run the same configuration with different seeds in the objective function, I get the same results regardless of the scenario or instance. If this is supposed to happen, why did you perform 30 replications in your original work? Thank you!
Hi @glatq and sorry for the late reply, it seems like we missed this issue.
Indeed, surrogates in YAHPO Gym are deterministic, i.e., repeatedly evaluating the same configuration will give you the same output.
For a discussion of noisy surrogates, please see https://slds-lmu.github.io/yahpo_gym/frequently_asked.html#noisy-surrogates.
In summary, noisy surrogates are currently not supported but will eventually be added in YAHPO Gym v2 (which is, however, currently somewhat on hold).
When running any black-box optimizer on a function (regardless of whether it is noisy or deterministic), you should still perform replications.
This is because the optimizers themselves behave stochastically.
For example, a random search will propose different configurations uniformly at random depending on the seed.
Bayesian Optimization will use a different initial design, and both the surrogate model fit and the acquisition function optimization depend on the random seed; therefore, the output of running BO on a black-box function will in general not be deterministic.
The same holds for evolutionary algorithms, owing to parent selection, mutation, and crossover operations.
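To make this concrete, here is a minimal sketch (using the same `lcbench` instance as in your snippet) of a tiny random search in which the seed controls the optimizer's sampling rather than the objective function. The `random_search` helper and the `n_evals=20` budget are illustrative choices, not part of YAHPO Gym's API:

```python
from yahpo_gym import BenchmarkSet

# Sketch of why replications matter: the surrogate objective is
# deterministic, but the optimizer's sampling is not. The seed goes into
# the configuration-space sampler (the "optimizer"), not the objective.
b = BenchmarkSet("lcbench", instance="167168")
opt_space = b.get_opt_space()

def random_search(seed: int, n_evals: int = 20) -> float:
    opt_space.seed(seed)  # seed the optimizer's sampling, not the objective
    best = float("-inf")
    for _ in range(n_evals):
        config = opt_space.sample_configuration().get_dictionary()
        res = b.objective_function(config)[0]
        best = max(best, res["val_accuracy"])
    return best

# Different optimizer seeds give different search trajectories, even
# though each individual configuration always evaluates to the same value.
for seed in (1, 123, 2024):
    print(seed, random_search(seed))
```

Replicating such a run (e.g., 30 times with different seeds) is what produces the distribution over optimizer performance that the rank plots in the paper are based on.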
Here is the code that gives me the same results:
```python
from yahpo_gym import *

b = BenchmarkSet("lcbench", instance="167168")
config = b.get_opt_space().sample_configuration().get_dictionary()
res1 = b.objective_function(config, seed=1, logging=True)
res2 = b.objective_function(config, seed=123, logging=True)
print(res1)
print(res2)
```
which returns

```
[{'time': 39.226597, 'val_accuracy': 66.65739, 'val_cross_entropy': 0.8803897, 'val_balanced_accuracy': 0.67849356, 'test_cross_entropy': 1.2539272, 'test_balanced_accuracy': 0.677087}]
[{'time': 39.226597, 'val_accuracy': 66.65739, 'val_cross_entropy': 0.8803897, 'val_balanced_accuracy': 0.67849356, 'test_cross_entropy': 1.2539272, 'test_balanced_accuracy': 0.677087}]
```