fix torch version requirement for flashinfer compatibility #84

Open · wants to merge 1 commit into main
Conversation

@kofi-bhr commented Mar 4, 2025

fix torch version compatibility issue with flashinfer wheels

the torch requirement in pyproject.toml (torch>=2.5.1) was incompatible with the flashinfer wheels, which are built for torch 2.4. this change updates the requirement to torch>=2.4.0,<2.5.0 so users can install both packages without version conflicts.

this addresses issue #72
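
for reference, the changed pin in pyproject.toml looks roughly like this (a sketch; the surrounding entries are omitted and may differ from the actual file):

```toml
# pyproject.toml (excerpt) — sketch, not the full dependency list
[project]
dependencies = [
    "torch>=2.4.0,<2.5.0",  # was "torch>=2.5.1"; the flashinfer wheels target torch 2.4
]
```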

changes:

  • updated torch dependency in pyproject.toml
  • updated changelog to document the change

tested by:

  • verifying that the pinned versions are compatible with the flashinfer wheel path in the readme

:)
kofi

@jakep-allenai (Collaborator) commented Mar 4, 2025

Probably smart; I want to test it shortly and get back to you!

The issue is that when you want to install sglang properly, you still need to run pip install "sglang[all]==0.4.2" --find-links https://flashinfer.ai/whl/cu124/torch2.4/flashinfer/ with the find-links, so I left it as a separate step to install sglang after the main repo is installed.
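
Roughly, the two-step flow looks like this (the editable install of the main repo is an assumption; use however you normally install it):

```bash
# step 1: install the main repo first (assumed editable install; adjust to taste)
pip install -e .

# step 2: install sglang separately, pulling flashinfer from the torch2.4 wheel index
pip install "sglang[all]==0.4.2" --find-links https://flashinfer.ai/whl/cu124/torch2.4/flashinfer/
```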

@jakep-allenai (Collaborator) commented
Hmm, I tried and it's still doing the thing where it installs torch 2.5.1 and then goes back to the other version.
When I try the latest sglang, it then goes to torch 2.6, and then back to 2.5.1, so I'm not too happy either way. Does it just completely fail to install for you unless you make this change?
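
For anyone comparing notes, a quick way to see where the resolver actually lands after the install finishes:

```bash
# confirm which torch version ended up installed
python -c "import torch; print(torch.__version__)"
# and which sglang/flashinfer versions pip resolved alongside it
pip list | grep -iE "torch|sglang|flashinfer"
```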

@dillonroach commented
For myself on Linux with a 3090:
I was able to keep torch == 2.5.1 in the toml if you then change the later pip install to sglang == 0.4.3 and point find-links at https://flashinfer.ai/whl/cu124/torch2.5/flashinfer/ instead of the ../torch2.4/* version (you need 0.4.3 because 0.4.2 wants flashinfer 0.1.6, and 0.2.0 is all you find at that link). The ninja build at the end of all the mess failed to find nvcc when following the general readme pattern, but a conda install -c conda-forge cuda-toolkit seems to have appeased the thing. That was at least enough to run the basic pipeline against a local pdf; I didn't try anything fancier beyond that.
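
Collected into commands, the recipe above reads roughly as (environments and paths may differ on your machine):

```bash
# keep torch>=2.5.1 in pyproject.toml, then install sglang 0.4.3 against the torch2.5 wheels
pip install "sglang[all]==0.4.3" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer/

# if the ninja build step can't find nvcc, installing the CUDA toolkit into the env helped here
conda install -c conda-forge cuda-toolkit
```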

@Devcode518 commented
> For myself on Linux with a 3090: I was able to use torch == 2.5.1 in the toml if you then change the later pip install to sglang == 0.4.3 … (quoting @dillonroach's comment above)

Can I ask what your CUDA version is? My computer has CUDA 12.2, and it reports an error after installation.
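
If it helps for comparing setups, two quick checks (the driver's supported CUDA version is the ceiling for runtime wheels like cu124; the toolkit's nvcc is what the build actually uses):

```bash
# highest CUDA version the installed driver supports (shown in the header)
nvidia-smi
# version of the locally installed CUDA toolkit that nvcc/ninja will build with
nvcc --version
```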
