Description
File "E:\github\crowdcent\model.py", line 112, in objective model.fit( File "C:\Users\grumb\miniconda3\envs\crowdcent\Lib\site-packages\centimators\model_estimators\keras_estimators\sequence.py", line 47, in fit super().fit( File "C:\Users\grumb\miniconda3\envs\crowdcent\Lib\site-packages\centimators\model_estimators\keras_estimators\base.py", line 85, in fit X_np = _ensure_numpy(X) ^^^^^^^^^^^^^^^^ File "C:\Users\grumb\miniconda3\envs\crowdcent\Lib\site-packages\centimators\narwhals_utils.py", line 27, in _ensure_numpy return numpy.asarray(data) ^^^^^^^^^^^^^^^^^^^ File "C:\Users\grumb\miniconda3\envs\crowdcent\Lib\site-packages\torch\_tensor.py", line 1211, in __array__ return self.numpy() ^^^^^^^^^^^^ TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
What puzzles me about this is: it does not matter what I pass into `fit`. I tried passing a NumPy array directly to circumvent the narwhals fallback in `narwhals_utils.py` line 27, but somewhere (no idea where) torch converts it into a GPU tensor. Any idea where I could look?
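For reference, a minimal sketch of the kind of guard that avoids this error, assuming the object reaching `_ensure_numpy` is a `torch.Tensor` on a CUDA device (the helper name `to_numpy_safe` is hypothetical and not part of centimators):

```python
import numpy as np
import torch


def to_numpy_safe(data):
    # Hypothetical helper: numpy.asarray() on a CUDA tensor raises the
    # TypeError from the traceback above, because torch refuses to copy
    # GPU memory implicitly. Detach and move to host memory first.
    if isinstance(data, torch.Tensor):
        return data.detach().cpu().numpy()
    return np.asarray(data)


# Reproduction (requires a CUDA device):
# x = torch.randn(8, 4, device="cuda")
# np.asarray(x)      # TypeError: can't convert cuda:0 device type tensor to numpy...
# to_numpy_safe(x)   # works: copies the tensor to the CPU before conversion
```

If `X` really is a NumPy array at the call site, it may be worth checking whether something upstream in the pipeline (for example the Keras torch backend or an earlier transformer) converts inputs to tensors on the default CUDA device before `fit` is reached.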