kvcache.ai (@kvcache-ai)

KVCache.AI is a joint research project between MADSys and top industry collaborators, focusing on efficient LLM serving.

Pinned repositories

  1. Mooncake (Public)

    Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.

    C++ · 4.2k stars · 410 forks

  2. ktransformers (Public)

    A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations

    Python · 15.2k stars · 1.1k forks

  3. TrEnv-X (Public)

    Go · 63 stars · 1 fork

Repositories

Showing 9 of 9 repositories
  • sglang (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 2 stars · Apache-2.0 · 3,208 forks · 0 issues · 1 PR · Updated Oct 29, 2025
  • sglang_awq (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 0 stars · Apache-2.0 · 3,201 forks · 0 issues · 0 PRs · Updated Oct 28, 2025
  • Mooncake (Public)

    Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.

    C++ · 4,168 stars · Apache-2.0 · 410 forks · 165 issues (6 need help) · 45 PRs · Updated Oct 28, 2025
  • ktransformers (Public)

    A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations

    Python · 15,239 stars · Apache-2.0 · 1,097 forks · 630 issues · 19 PRs · Updated Oct 27, 2025
  • TrEnv-X (Public)
    Go · 63 stars · Apache-2.0 · 1 fork · 0 issues · 0 PRs · Updated Sep 15, 2025
  • sglang-npu (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 0 stars · Apache-2.0 · 3,208 forks · 0 issues · 0 PRs · Updated Aug 12, 2025
  • DeepEP_fault_tolerance (Public, forked from deepseek-ai/DeepEP)

    DeepEP: an efficient expert-parallel communication library, extended with fault-tolerance support

    Cuda · 2 stars · MIT · 973 forks · 0 issues · 0 PRs · Updated Jul 31, 2025
  • custom_flashinfer (Public, forked from flashinfer-ai/flashinfer)

    FlashInfer: Kernel Library for LLM Serving

    Cuda · 5 stars · Apache-2.0 · 546 forks · 0 issues · 0 PRs · Updated Jul 24, 2025
  • vllm (Public, forked from vllm-project/vllm)

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 14 stars · Apache-2.0 · 10,961 forks · 0 issues · 0 PRs · Updated Mar 27, 2025
