Issues: ggml-org/llama.cpp
#12164 Compile bug: issue compiling on Ubuntu (desktop and server) using VirtualBox [bug-unconfirmed] (opened Mar 3, 2025 by sandboxyer)
#12160 Misc. bug: error calculating the KV cache position in llama-server [bug-unconfirmed] (opened Mar 3, 2025 by Clauszy)
#12158 Eval bug: the answers from example/llama.android have some problems [bug-unconfirmed] (opened Mar 3, 2025 by chtfrank)
#12152 CUDA/HIP: maintain_cuda_graph uses cudaGraphKernelNodeGetParams incorrectly (opened Mar 2, 2025 by IMbackK)
#12149 Misc. bug: Q4_0 repacking results in double RAM usage [bug-unconfirmed] (opened Mar 2, 2025 by bartowski1182)
#12146 Misc. bug: gguf-dump fails because 'newbyteorder' was removed [bug-unconfirmed] (opened Mar 2, 2025 by dlippold)
#12142 Feature Request: implement Qwen2Model [enhancement] (opened Mar 2, 2025 by wqerrewetw)
#12140 Feature Request: enable CUDA 11.4 and CUDA arch 3.7 [enhancement] (opened Mar 2, 2025 by ChunkyPanda03)
#12129 Feature Request: user-customizable RAG integration in llama.cpp for enhanced contextual retrieval [enhancement] (opened Mar 1, 2025 by gnusupport)
#12128 Feature Request: [enhancement] (opened Mar 1, 2025 by gnusupport)
#12124 Eval bug: on RISC-V, output tokens are broken [bug-unconfirmed] (opened Mar 1, 2025 by op21beyond)
#12122 Eval bug: error running Phi4-mini GGUF: unknown pre-tokenizer type 'gpt-4o' [bug-unconfirmed] (opened Mar 1, 2025 by crisdesivo)
#12120 Misc. bug: server web UI: complete output is lost due to the "normal" context-shift message [bug-unconfirmed] (opened Feb 28, 2025 by Optiuse)
#12117 Feature Request: support for the Phi4MMForCausalLM architecture [enhancement] (opened Feb 28, 2025 by ns3284)
#12113 Misc. bug: llama-simple-chat throws "context size exceeded" [bug-unconfirmed] (opened Feb 28, 2025 by emmetra)
#12107 Eval bug: the llama-cpp-deepseek-r1.jinja template misses the <think> tag [bug-unconfirmed] (opened Feb 28, 2025 by Sherlock-Holo)
#12101 Compile bug: fatal error: 'ggml.h' file not found [bug-unconfirmed] (opened Feb 28, 2025 by peekaboolabs-appdev)
#12096 Eval bug: llama.cpp returns gibberish on Intel Core Ultra 7 (155H) with Arc iGPU [bug-unconfirmed] (opened Feb 27, 2025 by cgruver)
#12092 Compile bug: failed to compile on CentOS 8 [bug-unconfirmed] (opened Feb 27, 2025 by hbuxiaofei)
Tip: mix and match filters to narrow down what you're looking for.
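The same filters can also be combined programmatically. Below is a minimal sketch, using only the Python standard library and GitHub's documented REST endpoint for repository issues; the repo and label name come from the listing above, while the page size and output format are illustrative choices.

```python
# Minimal sketch: fetch open issues for ggml-org/llama.cpp filtered by label,
# mirroring the "bug-unconfirmed" view in the listing above.
import json
import urllib.request

URL = (
    "https://api.github.com/repos/ggml-org/llama.cpp/issues"
    "?state=open&labels=bug-unconfirmed&per_page=20"
)

req = urllib.request.Request(URL, headers={"Accept": "application/vnd.github+json"})
with urllib.request.urlopen(req) as resp:
    issues = json.load(resp)

for issue in issues:
    # The issues endpoint also returns pull requests; skip those.
    if "pull_request" in issue:
        continue
    labels = ", ".join(lbl["name"] for lbl in issue["labels"])
    print(f"#{issue['number']} {issue['title']} [{labels}] by {issue['user']['login']}")
```

Unauthenticated requests are rate-limited; passing a token in an Authorization header raises the limit, and additional query parameters (e.g. creator, milestone) can be mixed in the same way the web UI mixes filters.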