
llama.cpp: server-cuda-b4800 (Public · Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b4800
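Once the image is pulled, it can be run like any other llama.cpp server container. A minimal sketch of starting the CUDA server image; note that the model path, model filename, and port below are placeholders, not values from this page:

```shell
# Run the CUDA-enabled llama.cpp server image.
# Requires the NVIDIA Container Toolkit for --gpus all.
# /path/to/models and model.gguf are hypothetical; substitute your own.
docker run --gpus all -p 8080:8080 \
  -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:server-cuda-b4800 \
  -m /models/model.gguf --host 0.0.0.0 --port 8080
```

The server then listens on the mapped port for HTTP inference requests.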

Recent tagged image versions

  • Published about 18 hours ago · Digest sha256:a25e03c56ce423b09b1a268fa8930b15287734b8cd7f9639e7d6fb1d03a83245 · 198 downloads
  • Published about 18 hours ago · Digest sha256:703343ede4c95086ceb80caf548f93b15f99bb35a347988bc4f28c632e0d8523 · 76 downloads
  • Published about 18 hours ago · Digest sha256:55a0efdcaf61e7aef72d80b7cb3df5a9f7a90e2deb4257a8beb01f7167de3f9a · 76 downloads
  • Published about 18 hours ago · Digest sha256:d2eb68ed63f0a154f33270ce2824b4460e9feb007d98f800d12ed32dc1e1a68d · 121 downloads
  • Published about 18 hours ago · Digest sha256:54153a45607ee84d9a3c0c3ae2a9a8a3f271c7ef5bd77a860d70024f7b807391 · 78 downloads
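The digests listed above can be used to pin an exact image, independent of tag movement, since `docker pull` accepts an `@sha256:` reference in place of a tag. For example, using the first digest from the list:

```shell
# Pull the image by content digest rather than by tag
docker pull ghcr.io/ggml-org/llama.cpp@sha256:a25e03c56ce423b09b1a268fa8930b15287734b8cd7f9639e7d6fb1d03a83245
```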


Details

  • Last published: 18 hours ago
  • Discussions: 2.06K
  • Issues: 733
  • Total downloads: 41.6K