From e944b8b1cb74b01990fbf7301932752378520003 Mon Sep 17 00:00:00 2001
From: Matthias
Date: Mon, 18 Aug 2025 14:12:42 +0200
Subject: [PATCH] Update hyperlink to llama.cpp build docs

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 382f7cbed..2f73d461d 100644
--- a/README.md
+++ b/README.md
@@ -59,7 +59,7 @@ pip install llama-cpp-python \
 
 ### Installation Configuration
 
-`llama.cpp` supports a number of hardware acceleration backends to speed up inference as well as backend specific options. See the [llama.cpp README](https://github.com/ggerganov/llama.cpp#build) for a full list.
+`llama.cpp` supports a number of hardware acceleration backends to speed up inference as well as backend specific options. See the [llama.cpp README](https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md) for a full list.
 
 All `llama.cpp` cmake build options can be set via the `CMAKE_ARGS` environment variable or via the `--config-settings / -C` cli flag during installation.
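
For context on the `CMAKE_ARGS` / `--config-settings` mechanism mentioned in the hunk's context lines, a minimal usage sketch follows. The specific cmake flag (`-DGGML_CUDA=on`) is only an illustrative assumption, and the `cmake.args` config-settings key assumes a scikit-build-core based build; the linked build docs list the flags each backend actually needs.

```bash
# Pass llama.cpp cmake options through the CMAKE_ARGS environment variable (Linux/macOS)
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python

# Or pass them via pip's --config-settings / -C flag
# (cmake.args is the scikit-build-core setting; flag value is an illustrative assumption)
pip install llama-cpp-python -C cmake.args="-DGGML_CUDA=on"
```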