Hi, I'm a big fan of candle so I implemented the LLMs below in my repo forked from candle:
Furthermore, I published a Qwen2.5 Instruct demo in my Hugging Face Space.
Can I add these Qwen models to candle-examples and candle-wasm-examples?
Sure, feel free to make a PR to add these!
Hello, is the quantized version as quick as llama.cpp on macOS?
@lucasjinreal For sure, the quantized version is in #2797.
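For reference, candle examples are typically run through cargo from a checkout of the repo; a sketch of what invoking such a quantized example on macOS might look like (the example name and flags here are assumptions, not confirmed by this thread; check the PR for the actual ones):

```shell
# Hypothetical invocation of a quantized Qwen example from the candle repo root.
# --features metal enables candle's Metal backend on macOS (assumption: the
# example is compiled with Metal support); the example name is illustrative.
cargo run --release --features metal --example quantized-qwen2-instruct -- \
  --prompt "Why is the sky blue?"
```

Running with `--release` matters here: debug builds of candle are far too slow for meaningful speed comparisons against llama.cpp.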