Pinned

  1. vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 56.5k stars · 9.7k forks

  2. llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1.9k stars · 214 forks

  3. recipes (Public)

    Common recipes to run vLLM

    117 stars · 31 forks
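
As a quick illustration of what the engine in the first pinned repository looks like in use, here is a minimal offline-inference sketch based on vLLM's documented `LLM` and `SamplingParams` API; the model name is only an example, and any Hugging Face causal LM supported by vLLM would work in its place.

```python
from vllm import LLM, SamplingParams

# Load a model into the engine; the model name here is illustrative.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# generate() batches the prompts through the engine's continuous-batching
# scheduler and paged KV cache, returning one RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)

for output in outputs:
    print(output.outputs[0].text)
```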

Repositories

Showing 10 of 21 repositories
  • vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 56,452 stars · Apache-2.0 license · 9,703 forks · 1,791 open issues (16 need help) · 1,045 open pull requests · Updated Aug 27, 2025
  • aibrix (Public)

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Go · 4,098 stars · Apache-2.0 license · 435 forks · 216 open issues (21 need help) · 20 open pull requests · Updated Aug 27, 2025
  • vllm-spyre (Public)

    Community-maintained hardware plugin for vLLM on Spyre

    Python · 32 stars · Apache-2.0 license · 21 forks · 7 open issues · 15 open pull requests · Updated Aug 27, 2025
  • vllm-gaudi (Public)

    Community-maintained hardware plugin for vLLM on Intel Gaudi

    Python · 8 stars · 27 forks · 1 open issue · 19 open pull requests · Updated Aug 27, 2025
  • ci-infra (Public)

    Code for the vLLM CI and performance benchmark infrastructure

    HCL · 17 stars · 33 forks · 0 open issues · 8 open pull requests · Updated Aug 27, 2025
  • vllm-ascend (Public)

    Community-maintained hardware plugin for vLLM on Ascend

    Python · 1,043 stars · Apache-2.0 license · 377 forks · 350 open issues (6 need help) · 138 open pull requests · Updated Aug 27, 2025
  • guidellm (Public)

    Evaluate and enhance your LLM deployments for real-world inference needs

    Python · 537 stars · Apache-2.0 license · 71 forks · 60 open issues (5 need help) · 23 open pull requests · Updated Aug 27, 2025
  • flash-attention (Public; forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 89 stars · BSD-3-Clause license · 1,937 forks · 0 open issues · 13 open pull requests · Updated Aug 27, 2025
  • production-stack (Public)

    vLLM's reference system for Kubernetes-native, cluster-wide deployment with community-driven performance optimization

    Python · 1,717 stars · Apache-2.0 license · 267 forks · 73 open issues (3 need help) · 45 open pull requests · Updated Aug 27, 2025
  • llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (a quantization sketch follows this list)

    Python · 1,861 stars · Apache-2.0 license · 214 forks · 52 open issues (7 need help) · 31 open pull requests · Updated Aug 26, 2025
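
To give a concrete flavor of the llm-compressor workflow referenced above, below is a hedged sketch of a one-shot W4A16 GPTQ quantization run, modeled on the project's published examples; the module paths, model name, and dataset name are illustrative and should be checked against the current release.

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# Recipe: 4-bit weights / 16-bit activations via GPTQ on all Linear
# layers, keeping the output head (lm_head) in full precision.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

# One-shot post-training quantization with a small calibration set.
# Model and dataset names here are examples, not requirements.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The saved output directory can then be loaded by vLLM for serving (e.g. `vllm serve TinyLlama-1.1B-W4A16`), which is the optimized deployment path the repository description refers to.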