
llama.cpp / buildcache-cuda (Public · Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:buildcache-cuda

Recent tagged image versions

  • Published 25 minutes ago · Digest sha256:67b798ff1ed266c2162fea98d3b536ed1c2d067ac7fda38cfd2dff9fcba5b003 · 0 version downloads
  • Published 25 minutes ago · Digest sha256:765b4afb5a11437fca74fb39cae02ccf2236dad87f8a24e872a029ed4389b225 · 25 version downloads
  • Published 26 minutes ago · Digest sha256:f68601a8e184c7a853131a9876a7f4d68fae0376240f323c0ea4ebcced234692 · 0 version downloads
  • Published 26 minutes ago · Digest sha256:3ac9cb5642e7b760babcc412c4c30ecf6284357f0483f14f33c43976e5331d32 · 2 version downloads
  • Published 35 minutes ago · Digest sha256:f07f43a9c463e24ce213c26c793affb89f5a7c3a67bc382d6ecb1f5d1d5120b4 · 0 version downloads
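
Because the buildcache-cuda tag moves with each publish, a specific build from the list above can be pinned by pulling its digest directly (standard Docker digest syntax; the digest shown is the most recently published entry above):

$ docker pull ghcr.io/ggml-org/llama.cpp@sha256:67b798ff1ed266c2162fea98d3b536ed1c2d067ac7fda38cfd2dff9fcba5b003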


Details

Last published: 25 minutes ago
Discussions: 2.7K
Issues: 892
Total downloads: 722K