Hello, I would like to ask whether the checkpoint obtained from training on the training set uses the code parameters from the TableShift library. The paper describes it as follows. If so, since the number of training epochs for the neural network models in TableShift is set to 1, could the models be under-trained? In other words, if a model were fully trained on the training set, perhaps it would not suffer a significant performance drop on the test set? Or, once the model is fully trained on the training set, would the algorithm perform worse under test-time calibration?
"Backbone Models. For all experiments, we use three representative deep tabular models: MLP, Tabtransformer (Huang et al.
2020) and FT-Transformer (Gorishniy et al. 2021) as the backbone model.
Training Phase. For training the source model, we follow the TableShift benchmark (Gardner, Popovic, and Schmidt 2023)
for all setting of training hyperparameters. Specifically, we train each backbone model with a batch size of 512 for several
epochs, depending on the model’s convergence as evaluated on the validation set. The AdamW optimizer is used with a learning
rate of 0.01 and a weight decay of 0.01."
Let us clarify the points you raised about the hyperparameters.
The hyperparameters we mentioned are structure-related. You can find the specific configurations in default_hparameter.py under _DEFAULT_CONFIGS at line 20.
For training, we used multiple epochs to ensure convergence, which was determined using the in-distribution validation set in TableShift. This is what we refer to as "several epochs" in the paper; a minimal sketch of such a loop is given below.
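For concreteness, here is a rough sketch of what "train for several epochs with early stopping on the in-distribution validation set" could look like. This is not the actual TableShift training code: the synthetic data, the stand-in MLP backbone, and the `patience` value are illustrative assumptions, while the batch size (512), optimizer (AdamW), learning rate (0.01), and weight decay (0.01) follow the paper.

```python
# Minimal sketch (PyTorch) of multi-epoch training with early stopping on a
# validation set. Illustrative only -- not the actual TableShift training code.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical data and backbone; in practice these come from TableShift.
X_train, y_train = torch.randn(4096, 32), torch.randint(0, 2, (4096,))
X_val, y_val = torch.randn(1024, 32), torch.randint(0, 2, (1024,))
model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 2))  # stand-in MLP

train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=512, shuffle=True)
val_loader = DataLoader(TensorDataset(X_val, y_val), batch_size=512)

# Hyperparameters stated in the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=0.01, weight_decay=0.01)
criterion = nn.CrossEntropyLoss()

best_acc, best_state, patience, bad_epochs = 0.0, None, 5, 0  # patience is an assumption
for epoch in range(100):  # upper bound; early stopping usually triggers sooner
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        criterion(model(xb), yb).backward()
        optimizer.step()

    # Evaluate on the in-distribution validation set.
    model.eval()
    correct = 0
    with torch.no_grad():
        for xb, yb in val_loader:
            correct += (model(xb).argmax(dim=1) == yb).sum().item()
    acc = correct / len(X_val)

    if acc > best_acc:  # keep the checkpoint with the best validation accuracy
        best_acc, best_state, bad_epochs = acc, copy.deepcopy(model.state_dict()), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # stop once validation accuracy plateaus
            break

model.load_state_dict(best_state)  # source checkpoint used at test time
```

The key point is that the number of epochs is not fixed at 1: training continues until validation performance stops improving, and the best validation checkpoint is the one carried forward to test-time calibration.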
Please feel free to let us know if you have any other questions!