Global

- Clipping to [0,1]
- No tuning over input and output trafos: we might want to hardcode them in yahpo_train.
- Checkpoint early stopping
- Dump torch models for post-hoc sanity checks and comparison to ONNX (a minimal export/comparison sketch follows this list).
- Runtime: how can we ensure monotonicity?
  - Idea 1: Train the NNs for longer.
  - Idea 2: Train on diffs (postponed).
    - epoch: train on diffs and accumulate.
    - fraction: discretize, train on diffs and accumulate.
  - Idea 3: Add a loss term that penalizes f(x) - f(x*) > 0, where x* is a copy of x with an increased budget (runtime should not shrink when the budget grows). This should be easy to implement, as we can compute v = df(x)_i / dx_j (target i, budget feature j) and add a penalty term -lambda * v to the loss (see the sketch after this list).
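A possible way to dump a torch surrogate and sanity-check it against its ONNX export; the tiny stand-in model, file names, input width, and tolerances below are placeholders, not yahpo_train code:

```python
import numpy as np
import torch
import onnxruntime as ort

# Stand-in for a trained surrogate (placeholder architecture and input width).
model = torch.nn.Sequential(torch.nn.Linear(42, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
model.eval()

# Dump the torch weights for post-hoc sanity checks.
torch.save(model.state_dict(), "surrogate.pt")

# Export to ONNX with a dummy batch.
dummy = torch.randn(8, 42)
torch.onnx.export(model, dummy, "surrogate.onnx", input_names=["x"], output_names=["y"])

# Compare torch and ONNX predictions on the same batch.
with torch.no_grad():
    y_torch = model(dummy).numpy()
sess = ort.InferenceSession("surrogate.onnx")
y_onnx = sess.run(None, {"x": dummy.numpy()})[0]
np.testing.assert_allclose(y_torch, y_onnx, rtol=1e-4, atol=1e-5)
```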
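A minimal sketch of the Idea 3 derivative penalty in plain PyTorch; the stand-in surrogate, the budget/runtime column indices, and lambda are placeholders, and the real yahpo_train training loop would look different:

```python
import torch
import torch.nn as nn

# Placeholder surrogate and data; in yahpo_train these come from the real pipeline.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x = torch.rand(64, 10)
y = torch.rand(64, 3)

budget_col = 0   # index j of the budget feature (epoch / fraction) -- placeholder
runtime_out = 2  # index i of the runtime target -- placeholder
lam = 0.1        # penalty weight lambda -- placeholder

mse = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x.requires_grad_(True)
pred = model(x)
fit_loss = mse(pred, y)

# v = df(x)_i / dx_j: derivative of the runtime target w.r.t. the budget feature.
grads = torch.autograd.grad(pred[:, runtime_out].sum(), x, create_graph=True)[0]
v = grads[:, budget_col]

# Penalty term -lambda * v as stated in Idea 3; a hinged variant, lam * relu(-v),
# would penalize only negative derivatives (non-monotone behaviour).
loss = fit_loss - lam * v.mean()

opt.zero_grad()
loss.backward()
opt.step()
```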
rbv2

- We should evaluate surrogate quality at 0.05 and see if we want to include it.
lcbench
iaml
fair
nb301
fcnet
Postponed
- Weight samples by the inverse number of obs. per task? Either w_i = 1 or w_i = sqrt(1/n_task) (a minimal sketch follows this list).
- Revisit noisy surrogates: Revisit noisy #64
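A small sketch of the postponed per-task weighting, computing w_i = sqrt(1/n_task) from task ids and using it in a weighted MSE; the toy batch and all names are placeholders, not yahpo_train code:

```python
import torch

# Placeholder batch: predictions, targets, and a task id per sample.
pred = torch.rand(6, 1)
target = torch.rand(6, 1)
task_id = torch.tensor([0, 0, 0, 0, 1, 1])  # task 0 has 4 obs., task 1 has 2

# n_task: number of observations of the task each sample belongs to.
_, inverse, counts = torch.unique(task_id, return_inverse=True, return_counts=True)
n_task = counts[inverse].float()

# Either w_i = 1 (unweighted) or w_i = sqrt(1/n_task).
w = torch.sqrt(1.0 / n_task)

# Weighted MSE: per-sample squared errors scaled by w_i, normalized by the weight sum.
per_sample = ((pred - target) ** 2).mean(dim=1)
loss = (w * per_sample).sum() / w.sum()
```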