This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.
[Backend] Add Llamacpp backend (#2975)
* Add llamacpp backend
* Get rid of llama_batch_get_one()
* Use max_batch_total_tokens
* Handle max_batch_size
* Add some input validation checks
* Handle ctx args & fix sampling
* Add GPU args
* Add --defrag-threshold
* Add a stupid batch mechanism
* Cleanup
* Add --numa
* Fix args
* Enable flash attention by default
* Add --offload-kqv
* Fix batch_pos
* backend(llama): add CUDA Dockerfile_llamacpp for now
* Only export the latest logits
* Output real logprobs
* Fix batching
* Fix seq iterations
* Auto-detect n_threads when not provided
* Clear request cache after completion
* Remove warmup
* Cleanup
* backend(llama): add CUDA architectures build argument for Dockerfile
* Add specific args for batch
* Add --type-v & --type-k
* Bump llamacpp to b4623
* Disable graceful shutdown in debug mode
* Update Dockerfile_llamacpp
* Cleanup Dockerfile
* Update Cargo.lock
* Update args
* Simplify batching logic
* Set TGI_LLAMA_PKG_CUDA from CUDA_VERSION
* Rename bindings
* Remove n_ctx
* Make max_batch_total_tokens optional
* Ensure all samplers are freed on error
* Initialize penalty_last_n with llamacpp default value
* Cleanup
* Improve default settings
* Add doc
* Update docs
* Thanks clippy
* Thanks cargo fmt
* Update docs
* Do not use HOSTNAME env
* Bump llama.cpp & cuda
* Fix requirements.txt
* Fix fmt
* Enable KQV offload by default
* Remove Ngrok tunneling
* Remove .cargo/config.toml
* Fix Dockerfile
* Add missing cuda prefix
* Handle custom llama.cpp dir
* Cleanup
* Add README.md
* Add HF transfer
* Fix bool args
* Update doc
* Update doc

---------

Signed-off-by: Adrien Gallouët <[email protected]>
Co-authored-by: Morgan Funtowicz <[email protected]>
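The commits above add a dedicated `Dockerfile_llamacpp` and a set of launcher flags (`--numa`, `--defrag-threshold`, `--type-k`/`--type-v`, `max_batch_total_tokens`). A minimal sketch of how building and running that image might look is below; the image tag, port mapping, model id, and flag values are assumptions for illustration, not taken from this commit:

```shell
# Build the CUDA image from the Dockerfile added in this PR.
# The tag "tgi-llamacpp" is a placeholder chosen here.
docker build -t tgi-llamacpp -f Dockerfile_llamacpp .

# Run the server with some of the flags named in the commit list.
# Model id, port, and all values below are illustrative placeholders.
docker run --gpus all -p 8080:80 tgi-llamacpp \
    --model-id <gguf-model> \
    --max-batch-total-tokens 8192
```

Per the commit list, flash attention and KQV offload are enabled by default, so neither needs to be passed explicitly.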