Breaks with tensors on XPU device #7

@a-One-Fan

Description

Partially a Comfy issue, but I figured I might as well report this here instead.

import torch
import comfy_kitchen as ck
x = torch.randn(128, 256, device='xpu', dtype=torch.bfloat16)
ck.quantize_per_tensor_fp8(x, torch.Tensor((1.0,)).to('xpu'))

This errors out because XPU is not in the eager backend's allowed device set.

Stack trace
python -c "import torch; import comfy_kitchen as ck; x = torch.randn(128, 256, device='xpu', dtype=torch.bfloat16); ck.quantize_per_tensor_fp8(x, torch.Tensor((1.0,)).to('xpu'))"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\comfy_kitchen\__init__.py", line 65, in quantize_per_tensor_fp8
    return torch.ops.comfy_kitchen.quantize_fp8(x, scale, dtype_code)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\torch\_ops.py", line 1243, in __call__
    return self._op(*args, **kwargs)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\torch\_library\autograd.py", line 111, in autograd_impl
    result = forward_no_grad(*args, Metadata(keyset, keyword_only_args))
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\torch\_library\autograd.py", line 40, in forward_no_grad
    result = op.redispatch(keyset & _C._after_autograd_keyset, *args, **kwargs)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\torch\_ops.py", line 836, in redispatch
    return self._handle.redispatch_boxed(keyset, *args, **kwargs)  # type: ignore[return-value]
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\torch\_library\custom_ops.py", line 344, in backend_impl
    result = self._backend_fns[device_type](*args, **kwargs)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\torch\_compile.py", line 53, in inner
    return disable_fn(*args, **kwargs)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\torch\_dynamo\eval_frame.py", line 929, in _fn
    return fn(*args, **kwargs)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\torch\_library\custom_ops.py", line 377, in wrapped_fn
    return fn(*args, **kwargs)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\comfy_kitchen\backends\eager\quantization.py", line 211, in _op_quantize_fp8
    impl = registry.get_implementation("quantize_per_tensor_fp8", kwargs=kwargs)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\comfy_kitchen\registry.py", line 269, in get_implementation
    selected_backend = self.get_capable_backend(func_name, kwargs)
  File "C:\Users\Vikto\Desktop\Tools\Github_Repos\comfytest2\Comfy_Intel\cenv\lib\site-packages\comfy_kitchen\registry.py", line 233, in get_capable_backend
    raise NoCapableBackendError(func_name, failures)
comfy_kitchen.exceptions.NoCapableBackendError: No backend can handle 'quantize_per_tensor_fp8': eager: x: device xpu not in {'cuda', 'cpu'}

In comfy-kitchen/eager/__init__.py, line 30:

Could all_devices = frozenset({"cuda", "cpu"}) be changed to something like all_devices = frozenset({"cuda", "cpu", "xpu", "mps"}) (plus whatever other device types make sense)? And maybe the triton backend could get the same treatment?
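
For reference, a minimal sketch of what that change might look like (assuming the eager backend's device gating really is just this frozenset and that its kernels otherwise work on XPU tensors; both are my assumptions, not verified):

# comfy-kitchen/eager/__init__.py, around line 30 (hypothetical patch)
# Extend the device allow-list so the capability check in registry.get_capable_backend
# no longer rejects tensors living on the XPU device.
all_devices = frozenset({"cuda", "cpu", "xpu"})

With that in place the repro above should at least get past NoCapableBackendError; whether the eager quantize kernels then produce correct results on XPU is a separate question.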
