Tensor conflict #262

@lliligabriel

Description

Greetings!

I am following the tutorial "Build your own model for RT prediction."

At the step rt_model.predict(irt_pep), I get the error below. It seems to be a GPU vs. CPU conflict.

Is there a succinct way to resolve this?

Thanks!

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[10], line 1
----> 1 rt_model.predict(irt_pep)

File ~/miniforge3/envs/peptdeep/lib/python3.12/site-packages/peptdeep/model/model_interface.py:552, in ModelInterface.predict(self, precursor_df, batch_size, verbose, **kwargs)
    549 features = self._get_features_from_batch_df(batch_df, **kwargs)
    551 if isinstance(features, tuple):
--> 552     predicts = self._predict_one_batch(*features)
    553 else:
    554     predicts = self._predict_one_batch(features)

File ~/miniforge3/envs/peptdeep/lib/python3.12/site-packages/peptdeep/model/model_interface.py:835, in ModelInterface._predict_one_batch(self, *features)
    833 def _predict_one_batch(self, *features):
    834     """Predicting for a mini batch"""
--> 835     return self.model(*features).cpu().detach().numpy()

File ~/miniforge3/envs/peptdeep/lib/python3.12/site-packages/torch/nn/modules/module.py:1773, in Module._wrapped_call_impl(self, *args, **kwargs)
   1771     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1772 else:
-> 1773     return self._call_impl(*args, **kwargs)

File ~/miniforge3/envs/peptdeep/lib/python3.12/site-packages/torch/nn/modules/module.py:1784, in Module._call_impl(self, *args, **kwargs)
   1779 # If we don't have any hooks, we want to skip the rest of the logic in
   1780 # this function, and just call forward.
...
File ~/miniforge3/envs/peptdeep/lib/python3.12/site-packages/torch/nn/modules/linear.py:125, in Linear.forward(self, input)
    124 def forward(self, input: Tensor) -> Tensor:
--> 125     return F.linear(input, self.weight, self.bias)

RuntimeError: Expected all tensors to be on the same device, but got mat2 is on cuda:0, different from other tensors on cpu (when checking argument in method wrapper_CUDA_mm)
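The traceback shows a classic PyTorch device mismatch: the Linear layer's weights sit on cuda:0 while the input batch tensors are still on the CPU. As a general PyTorch illustration (not peptdeep's own API; peptdeep's ModelInterface manages devices internally, so consult its device settings rather than calling .to() on its internals directly), the underlying rule is that the model and its inputs must live on the same device:

```python
import torch
import torch.nn as nn

# Pick one device and keep the model and its inputs on it.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(8, 1).to(device)  # weights now live on `device`
x = torch.randn(4, 8)               # a fresh tensor defaults to CPU

# Moving the input to the model's device avoids the
# "Expected all tensors to be on the same device" RuntimeError
# raised by F.linear in the traceback above.
with torch.no_grad():
    y = model(x.to(device))

print(y.shape)
```

If either the model or the input is left on the wrong device, the forward pass fails exactly as in the traceback; moving both to one device (or running everything on CPU) resolves it.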