llama.cpp — server-cuda-b4800 (Public, latest)
Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b4800
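Beyond pulling the image, the typical next step is running the bundled llama.cpp HTTP server with GPU access. The sketch below follows the project's documented Docker usage; the model directory, GGUF filename, and port mapping are placeholders to substitute with your own.

```shell
# Pull the CUDA-enabled llama.cpp server image (the tag shown on this page)
docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b4800

# Run the server with GPU access (requires the NVIDIA Container Toolkit).
# /path/to/models and model.gguf are placeholders — substitute your own
# model directory and GGUF file.
docker run --gpus all \
  -v /path/to/models:/models \
  -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-cuda-b4800 \
  -m /models/model.gguf \
  -ngl 99 \
  --host 0.0.0.0 --port 8080
```

`-ngl 99` offloads all model layers to the GPU; binding to `0.0.0.0` is needed so the server is reachable from outside the container.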
Recent tagged image versions
- 198 downloads
- 76 downloads
- 76 downloads
- 121 downloads
- 78 downloads
Details
- Repository: ggml-org/llama.cpp
- License: MIT
- Stars: 75.7k
- Last published: 18 hours ago
- Discussions: 2.06K
- Issues: 733