support youtu-vl model #18479
Conversation
ngxson left a comment
also wait for @CISC @ggerganov reviews for libllama changes
CISC left a comment
LGTM after whitespace fixes
So should I revert to the previous state, or leave it as it is?
Revert, but don't touch
Perfect.
Thanks. I will change rsplit to split in chat_template.json.
Don't, just use it properly instead, as I suggested here: https://huggingface.co/tencent/Youtu-LLM-2B/discussions/1
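For context on the rsplit vs split point: in a Jinja-style chat template the two are not interchangeable once maxsplit is limited, because rsplit consumes delimiters from the right. A minimal Python sketch of the difference (the sample string below is hypothetical and not taken from the actual Youtu-LLM template):

```python
# Hypothetical sample text, only to show split vs rsplit semantics.
text = "system|user|assistant"

# split() with maxsplit=1 cuts at the first (leftmost) delimiter.
print(text.split("|", 1))   # ['system', 'user|assistant']

# rsplit() with maxsplit=1 cuts at the last (rightmost) delimiter.
print(text.rsplit("|", 1))  # ['system|user', 'assistant']
```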
@ngxson @ggerganov
Lint CI fails, please fix before we can merge. |
LOL, I think it just picks up the previous one. @f291400 Try rebasing, that should fix the CI.
Besides, @f291400, if you want reviews to be fast and efficient, read the contribution guidelines and validate your changes carefully. This PR is created from your master branch, so maintainers cannot push fixes directly to it; the PR has also been moved 2-3 times, which makes our work extremely inefficient.
I ran |
Thank you for acknowledging this submission, which will greatly advance the use of youtu-llm on llama.cpp. |
I attempted to merge via the GH web UI but failed, so unfortunately you need to fix the merge conflict yourself, @f291400.
So, it seems GitHub started allowing this now? |
No idea, probably it's allowed via the web UI only? I never have problems applying patches via the web UI. But if I do
I'm pretty sure merging from |
* Support Youtu-VL Model
* merge code
* fix bug
* revert qwen2 code & support rsplit in minja.hpp
* update warm info
* fix annotation
* u
* revert minja.hpp
* fix
* Do not write routed_scaling_factor to gguf when routed_scaling_factor is None
* fix expert_weights_scale
* LGTM after whitespace fixes
* fix
* fix
* fix
* layers to layer_index
* enum fix
---------
Co-authored-by: Xuan-Son Nguyen <[email protected]>
Co-authored-by: Sigbjørn Skjæret <[email protected]>
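One item in the squashed list above, skipping routed_scaling_factor when it is None, roughly corresponds to guarding the metadata write in the conversion script. A minimal sketch, assuming a convert_hf_to_gguf.py-style setup where hparams is the HF config dict and the writer exposes add_expert_weights_scale (treat the helper and the exact writer method name as assumptions, not the actual implementation in this PR):

```python
from typing import Any, Optional


def maybe_add_expert_weights_scale(hparams: dict[str, Any], gguf_writer: Any) -> None:
    """Write routed_scaling_factor to GGUF metadata only when it is set.

    Sketch only: the writer method name is assumed from the gguf-py
    GGUFWriter API and should be checked against the real class.
    """
    routed_scaling_factor: Optional[float] = hparams.get("routed_scaling_factor")
    if routed_scaling_factor is not None:
        gguf_writer.add_expert_weights_scale(routed_scaling_factor)
```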