UPSTREAM PR #18479: support youtu-vl model #755
base: main
Conversation
Explore the complete analysis inside the Version Insights.

I've successfully retrieved the summary report for your project. The report shows that Pull Request #755 for the auroralabs-loci/llama.cpp repository has significant performance regressions.

Key Highlights:

The report suggests investigating changes to data structure usage patterns and iterator efficiency, as this PR appears to introduce substantial performance regressions that should be addressed before merging. Would you like more detailed information about any specific aspect of this report?
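For context on what a flag like "iterator efficiency" usually refers to, below is a minimal generic C++ sketch. It is not code from this PR or from llama.cpp, and every name in it is hypothetical: it simply contrasts a hot loop that repeats a map lookup on every iteration with one that resolves the lookup once and reuses the cached entry.

```cpp
// Generic illustration only -- not code from this PR or from llama.cpp.
// Reports that flag "data structure usage patterns and iterator efficiency"
// often point at hot loops that repeat a lookup on every iteration instead
// of resolving it once. All names below are hypothetical.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Repeats an O(log n) map lookup on every loop iteration.
static int sum_slow(const std::map<std::string, std::vector<int>> &table,
                    const std::string &key) {
    int total = 0;
    for (size_t i = 0; i < table.at(key).size(); ++i) {
        total += table.at(key)[i];
    }
    return total;
}

// Looks the key up once, then iterates over the cached entry.
static int sum_fast(const std::map<std::string, std::vector<int>> &table,
                    const std::string &key) {
    const auto it = table.find(key);
    if (it == table.end()) {
        return 0;
    }
    int total = 0;
    for (int v : it->second) {
        total += v;
    }
    return total;
}

int main() {
    const std::map<std::string, std::vector<int>> table = {{"tokens", {1, 2, 3}}};
    std::printf("slow=%d fast=%d\n", sum_slow(table, "tokens"), sum_fast(table, "tokens"));
    return 0;
}
```

In a toy example like this the difference is negligible, but in a hot path the repeated lookup scales with both the container size and the iteration count, which is the kind of pattern such a report would surface.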
Force-pushed from a55e7b6 to b0bb6d6 (compare)
Force-pushed from 7816c41 to bb623bb (compare)
Explore the complete analysis inside the Version Insights.

I've successfully retrieved the summary report for your llama.cpp project (PR #755). The report shows the top 10 functions with the most significant performance changes between the base and target versions.

Key Highlights:

Would you like more detailed analysis on any specific function or aspect of this performance report?
Force-pushed from 5c1f0b4 to 03ffde7 (compare)
Force-pushed from 048ad94 to 6c1fde6 (compare)
Mirrored from ggml-org/llama.cpp#18479
Make sure to read the contributing guidelines before submitting a PR