Can't properly format my model for codalab #44
Hello, I'm having trouble formatting my model correctly for Codalab and need some help. I used a pretrained model from torchvision.models. For a Codalab submission the model needs to be wrapped in an nn.Module, so I thought I could do it with a wrapper class: class Model(nn.Module): .... But now, if I import it with from model import Model and instantiate it, I cannot run model.load_state_dict(torch.load(PATH)); I can only load the weights into the inner model, because my pretrained model is an attribute inside Model() and not the wrapper itself. Is there an easy way to fix this, or do I have to dig into the complete code to implement it properly?
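For concreteness, here is a minimal sketch of the setup described above; the attribute name self.model, the resnet18 architecture, and PATH are placeholders rather than the actual code:

```python
import torch
import torch.nn as nn
from torchvision import models

class Model(nn.Module):
    """Wrapper in the nn.Module format expected by the Codalab submission."""
    def __init__(self):
        super().__init__()
        # The pretrained torchvision model becomes a submodule of the wrapper,
        # so every key in the wrapper's state_dict is prefixed with "model."
        self.model = models.resnet18()

    def forward(self, x):
        return self.model(x)

model = Model()
# Fails with missing/unexpected keys if PATH was saved from the unwrapped
# torchvision model, because those keys carry no "model." prefix:
#   model.load_state_dict(torch.load(PATH))
# Loading into the inner model works, since the keys match there:
#   model.model.load_state_dict(torch.load(PATH))
```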
Replies: 1 comment
I think the easiest solution is to use your second approach locally: instantiate your wrapper class, load the weights into the inner model, and then save the whole wrapped model with the weights loaded. You can then submit this version to Codalab. That way, the model is properly wrapped in an nn.Module, and you won't need to reassign or extract anything during evaluation.
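A minimal sketch of that local conversion step, assuming model.py defines a wrapper like the one sketched above, that the inner torchvision model lives in an attribute called self.model, and that the existing checkpoint at PATH was saved from the unwrapped model; the file names are placeholders:

```python
import torch
from model import Model  # the nn.Module wrapper defined in model.py

PATH = "inner_weights.pth"          # hypothetical checkpoint from the unwrapped model
SUBMISSION_PATH = "submission.pth"  # hypothetical file to submit to Codalab

wrapped = Model()
# Load the existing weights where the keys actually match: the inner model.
wrapped.model.load_state_dict(torch.load(PATH))
# Save the state_dict of the whole wrapper; its keys now carry the wrapper's
# prefix, so a plain load_state_dict on the wrapper works at evaluation time.
torch.save(wrapped.state_dict(), SUBMISSION_PATH)

# At evaluation time the re-saved checkpoint loads directly into the wrapper:
model = Model()
model.load_state_dict(torch.load(SUBMISSION_PATH))
```

If the submission format expects a pickled module rather than a state_dict, torch.save(wrapped, SUBMISSION_PATH) would store the whole wrapped model instead.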