
[vllm] fix: ignore MoE router layers for FP8 quantization #5107

Merged

ISEEKYAN merged 3 commits into verl-project:main from zpqiu:fix-fp8-moe-router on Feb 5, 2026

Conversation

@zpqiu (Contributor) commented on Jan 29, 2026

What does this PR do?


  • Add logic to ignore MoE router layers (model.layers.{i}.mlp.gate) during FP8 quantization (see the sketch after this list)
  • Remove unused FP8_BLOCK_QUANT_KWARGS constant from vllm_fp8_utils.py
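
For illustration, here is a minimal sketch of how router layers can be matched by name and excluded; the helper name and regex are assumptions for this write-up, not the PR's actual code:

```python
import re

# Matches the MoE router, e.g. "model.layers.3.mlp.gate".
# The "$" anchor matters: "mlp.gate_proj" and "mlp.gate_up_proj" are
# ordinary MLP projections and must NOT be skipped.
_ROUTER_RE = re.compile(r"\.mlp\.gate$")

def is_fp8_ignored(layer_name: str) -> bool:
    """Keep MoE routers in high precision during FP8 quantization.

    The router is a tiny linear layer whose logits drive top-k expert
    selection; FP8 rounding there can flip expert choices, which is
    why these layers are excluded from quantization.
    """
    return _ROUTER_RE.search(layer_name) is not None

assert is_fp8_ignored("model.layers.0.mlp.gate")
assert not is_fp8_ignored("model.layers.0.mlp.gate_proj")
```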

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: [sglang] fix: skip MoE router layers for FP8 quantization #5122
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment and show results such as training-curve plots or evaluation results. For this PR, see the before/after mismatch-KL comparison posted in the conversation below.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

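No user-facing API changes: FP8 rollout is enabled the same way as before, and router layers are now excluded automatically. Below is a hypothetical usage sketch of online FP8 quantization in vLLM; the model name is an arbitrary MoE example and verl's exact config surface is not shown here:

```python
from vllm import LLM

# Hypothetical sketch: with this fix, enabling online FP8 rollout
# keeps each decoder layer's MoE router (mlp.gate) in BF16 while the
# rest of the MLP/attention weights are quantized to FP8.
llm = LLM(
    model="Qwen/Qwen2-57B-A14B-Instruct",  # arbitrary MoE model, not from the PR
    quantization="fp8",
)
print(llm.generate(["Hello, FP8 MoE rollout!"])[0].outputs[0].text)
```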

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.
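
At a high level, the fix adds every decoder layer's router to the FP8 ignore list and deletes the unused FP8_BLOCK_QUANT_KWARGS constant from vllm_fp8_utils.py. A minimal sketch of the ignore-list construction, with the function name assumed for illustration (the real change may match by pattern instead of enumerating indices):

```python
def build_fp8_ignore_list(num_hidden_layers: int) -> list[str]:
    # One entry per transformer block; ".mlp.gate" is the MoE router.
    return [f"model.layers.{i}.mlp.gate" for i in range(num_hidden_layers)]

# build_fp8_ignore_list(2)
# -> ["model.layers.0.mlp.gate", "model.layers.1.mlp.gate"]
```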

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@sharonyu-115 (Contributor) commented:

For reference, here is the before and after comparison result:

[figure: training curves comparing mismatch KL across the three runs listed below]

Orange: BF16 rollout + BF16 train
Purple: FP8 rollout (with router quantization) + BF16 train
Red: FP8 rollout (without router quantization) + BF16 train, showing a much lower mismatch KL

zpqiu marked this pull request as ready for review on February 4, 2026 at 14:32
zpqiu added 3 commits on February 4, 2026 at 06:43, each signed off by Zhaopeng Qiu <alexq@nvidia.com>
zpqiu force-pushed the fix-fp8-moe-router branch from 4b0bdb4 to aade46d on February 4, 2026 at 14:44
ISEEKYAN merged commit 597b63f into verl-project:main on Feb 5, 2026; 64 of 69 checks passed