
Can I add Qwen Instruct models? #2792

Open
ITHwang opened this issue Mar 1, 2025 · 3 comments

Comments

ITHwang commented Mar 1, 2025

Hi, I'm a big fan of candle, so I implemented the LLMs below in my fork of candle:

  • Qwen2-Instruct models, including quantized versions
  • Qwen2.5-Instruct models, including quantized versions

I also published a Qwen2.5-Instruct demo in my Hugging Face Space.

May I add these Qwen models to candle-examples and candle-wasm-examples?

LaurentMazare (Collaborator) commented

Sure, feel free to make a PR to add these!

lucasjinreal commented
Hello, is the quantized version as quick as llama.cpp on macOS?

ITHwang (Author) commented Mar 8, 2025

@lucasjinreal For sure, the quantized version is in #2797.
