CUDA support and PyTorch integration #464

@Linux-cpp-lisp

Description

Hi all,

I've been searching "optimized sparse tensor contractions" for a week and somehow only just found this... 😄

I'm curious what the current state of CUDA support is, and how onerous you think it would be to integrate this library with PyTorch. In particular, say I have a single large einsum in a PyTorch model that I want to accelerate, something like:

torch.einsum("zpui,pqrijk,zpqruvw,zqvj->zrwk")

where some of the tensors are dense and some have sparse dimensions with fixed sparsity structure.
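For concreteness, here is the dense version of that contraction with made-up dimension sizes and tensor names (everything below is illustrative; `np.einsum` follows the same subscript semantics as `torch.einsum`):

```python
import numpy as np

# Hypothetical dimension sizes, chosen arbitrarily for illustration
z, p, q, r, u, v, w, i, j, k = 2, 3, 3, 3, 4, 4, 4, 5, 5, 5

x = np.random.rand(z, p, u, i)           # "zpui"
W = np.random.rand(p, q, r, i, j, k)     # "pqrijk"
C = np.random.rand(z, p, q, r, u, v, w)  # "zpqruvw"
y = np.random.rand(z, q, v, j)           # "zqvj"

out = np.einsum("zpui,pqrijk,zpqruvw,zqvj->zrwk", x, W, C, y)
print(out.shape)  # (2, 3, 4, 5)
```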

I'm not worried about autodifferentiation: it would be simple to take the symbolic derivatives of einsums like this and feed them to TACO to generate separate compute kernels for the backward pass. So my questions are:
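Since an einsum is multilinear, the symbolic derivative with respect to any operand is itself an einsum: swap that operand's subscript string with the output's. A small NumPy sketch (hypothetical sizes, same subscripts as above) checking this against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
z = p = q = r = u = v = w = i = j = k = 2  # tiny illustrative sizes
x = rng.random((z, p, u, i))
W = rng.random((p, q, r, i, j, k))
C = rng.random((z, p, q, r, u, v, w))
y = rng.random((z, q, v, j))

spec = "zpui,pqrijk,zpqruvw,zqvj->zrwk"
out = np.einsum(spec, x, W, C, y)

# Upstream gradient g of some scalar loss L = (out * g).sum().
# dL/dx is the einsum obtained by swapping x's subscripts ("zpui")
# with the output's ("zrwk"):
g = rng.random(out.shape)
grad_x = np.einsum("zrwk,pqrijk,zpqruvw,zqvj->zpui", g, W, C, y)

# Finite-difference check on one entry of x (exact up to roundoff,
# since the contraction is linear in x)
eps = 1e-6
x2 = x.copy()
x2[0, 0, 0, 0] += eps
num = (np.einsum(spec, x2, W, C, y) * g).sum() - (out * g).sum()
print(abs(num / eps - grad_x[0, 0, 0, 0]) < 1e-4)  # True
```

Each such backward einsum is just another contraction, so it can be handed to TACO as a separate kernel exactly as the text suggests.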

  1. Is CUDA support mature enough for this kind of application?
  2. Is it possible to get the generated C/CUDA code from the python library in order to template it into PyTorch C++ extension code (for loading with https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.load_inline)?
  3. How difficult is it to fill the dense part of a mixed dense-sparse TACO tensor from PyTorch tensors?
  4. Is there any code already out there that works on any of these problems?
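On question 2, one plausible shape for that workflow, assuming the generated C source can be obtained as a string, is to template it into a `torch/extension.h` wrapper and hand the result to `load_inline`. Everything below is a hypothetical sketch, not TACO's actual API: the kernel body is a placeholder, and the compile/load step is shown commented out because it requires PyTorch and a C++ toolchain at runtime.

```python
# `taco_kernel_src` stands in for C code extracted from TACO's code
# generator; the function name and signature here are made up.
taco_kernel_src = r"""
int compute(double* out, const double* a, const double* b, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] * b[i];  // placeholder body
    return 0;
}
"""

# Wrapper template exposing the kernel to Python via pybind11.
cpp_template = r"""
#include <torch/extension.h>

{kernel}

torch::Tensor contract(torch::Tensor a, torch::Tensor b) {{
    auto out = torch::empty_like(a);
    compute(out.data_ptr<double>(), a.data_ptr<double>(),
            b.data_ptr<double>(), a.numel());
    return out;
}}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {{
    m.def("contract", &contract, "TACO-generated contraction");
}}
"""

cpp_source = cpp_template.format(kernel=taco_kernel_src)

# With PyTorch installed, this would JIT-compile and load the module:
# from torch.utils.cpp_extension import load_inline
# mod = load_inline(name="taco_contract", cpp_sources=cpp_source)
# result = mod.contract(a, b)
print("PYBIND11_MODULE" in cpp_source)  # True
```

A CUDA kernel would go through the analogous `cuda_sources` argument of `load_inline`; the templating step is the same.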

Thanks very much for your help and making this tool available!

Labels: user question