Conversation

@jinyangyuan-nvidia (Collaborator) commented Oct 28, 2025

Summary by CodeRabbit

Release Notes

  • Performance Improvements
    • Optimized profiling cache handling in the autotuner to reduce unnecessary tensor preparation operations.
    • Enhanced distributed execution for mixture-of-experts models with improved rank-aware token handling across parallel scenarios.

Description

This PR adds self.parallel_rank = model_config.mapping.tp_rank to the MoE class and uses it to access all_rank_chunk_size_list when chunked MoE is used.

The autotuner is also slightly modified to avoid unnecessary tensor allocations when the profiling cache hits.
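A minimal sketch of the fix described above (the Mapping and MoE classes here are simplified stand-ins, not the real TensorRT-LLM modules, and the flat per-TP-rank layout of all_rank_chunk_size_list is assumed for illustration):

```python
class Mapping:
    """Simplified stand-in for tensorrt_llm.mapping.Mapping."""

    def __init__(self, rank: int, tp_rank: int):
        self.rank = rank        # global rank across the whole job
        self.tp_rank = tp_rank  # rank within the tensor-parallel group


class MoE:
    """Simplified stand-in for the fused-MoE base class."""

    def __init__(self, mapping: Mapping):
        self.rank = mapping.rank
        # The fix: per-rank lists such as all_rank_chunk_size_list are laid
        # out by TP rank, so cache that rank once and use it for indexing.
        self.parallel_rank = mapping.tp_rank

    def local_chunk_size(self, all_rank_chunk_size_list):
        # Previously indexed with self.rank (the global rank), which is
        # wrong whenever the global rank differs from the TP rank.
        return all_rank_chunk_size_list[self.parallel_rank]


# Global rank 3 in a job with tp_size=2 maps to tp_rank 1:
moe = MoE(Mapping(rank=3, tp_rank=1))
print(moe.local_chunk_size([128, 96]))  # 96; indexing with rank 3 would raise
```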

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

@jinyangyuan-nvidia (Collaborator, Author)

/bot run

@coderabbitai coderabbitai bot (Contributor) commented Oct 28, 2025

📝 Walkthrough

The pull request introduces a parallel_rank attribute to the MoE class and updates the fused MoE implementation modules to use it instead of direct self.rank or mapping.tp_rank references. Additionally, the autotuner is optimized to defer tensor preparation until a profiling cache miss occurs.

Changes

  • MoE Interface Enhancement (tensorrt_llm/_torch/modules/fused_moe/interface.py)
    Adds a new public attribute parallel_rank to the MoE class, initialized as an alias of mapping.tp_rank during __init__.
  • Fused MoE Rank Reference Updates (tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py, fused_moe_deepgemm.py, fused_moe_trtllm_gen.py, fused_moe_wide_ep.py)
    Replaces rank identifier usages (self.rank, self.mapping.tp_rank) with the new self.parallel_rank attribute for indexing per-rank chunk sizes and slicing outputs in DP and distributed scenarios.
  • Autotuner Optimization (tensorrt_llm/_torch/autotuner.py)
    Defers the _prepare_input_tensors invocation until a profiling cache miss, moving preparation from the start of each loop iteration into the conditional block to avoid unnecessary work on cache hits.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

  • Verify parallel_rank attribute initialization correctly references mapping.tp_rank in all fused MOE modules
  • Confirm rank reference replacements are semantically correct in each usage context (chunk sizing, output slicing, indexing operations)
  • Validate autotuner cache logic change preserves correctness and doesn't introduce race conditions with deferred tensor preparation

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 14.29%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title Check ✅ Passed: The PR title "[https://nvbugs/5613089][fix] Fix the rank to access all_rank_chunk_size_list when chunked MoE is used" is directly related to the main changes in the changeset. It accurately describes the primary objective: fixing which rank identifier is used to access the chunk size list when chunked MoE is deployed. The changes across multiple MoE implementation files (fused_moe_cutlass.py, fused_moe_deepgemm.py, fused_moe_trtllm_gen.py, fused_moe_wide_ep.py, and interface.py) all center on replacing rank sources with a new parallel_rank attribute to properly index all_rank_chunk_size_list and related rank-indexed data structures. The title is clear, specific, and appropriately scoped.
  • Description Check ✅ Passed: The PR description is mostly complete and follows the template structure. The author provides a clear, concise "Description" section that explains both the main change (adding self.parallel_rank to the MoE class and using it to access all_rank_chunk_size_list) and a secondary optimization (modifying the autotuner to avoid unnecessary tensor allocations). However, the "Test Coverage" section is empty and most items in the PR Checklist are unchecked. While those sections are not filled out, the core description is substantive and sufficient to understand the PR's purpose and impact.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment

📜 Recent review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b37a8a9 and 3346541.

📒 Files selected for processing (6)
  • tensorrt_llm/_torch/autotuner.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (3 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/interface.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
  • tensorrt_llm/_torch/autotuner.py
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
  • tensorrt_llm/_torch/autotuner.py
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
  • tensorrt_llm/_torch/autotuner.py
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
🧬 Code graph analysis (3)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
tensorrt_llm/mapping.py (2)
  • rank (183-184)
  • rank (187-194)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
tensorrt_llm/mapping.py (2)
  • rank (183-184)
  • rank (187-194)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (1)
tensorrt_llm/mapping.py (2)
  • rank (183-184)
  • rank (187-194)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (10)
tensorrt_llm/_torch/autotuner.py (1)

609-609: LGTM! Good optimization to defer tensor preparation on cache hits.

The tensor preparation is now only performed when needed (on cache miss), avoiding unnecessary allocations and computational overhead when using cached profiling results.
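As a rough illustration of the deferred-preparation pattern (the cache, function names, and signatures below are illustrative stand-ins, not the real autotuner API):

```python
_profiling_cache = {}  # illustrative stand-in for the autotuner's cache


def choose_tactic(cache_key, prepare_input_tensors, profile_tactics):
    cached = _profiling_cache.get(cache_key)
    if cached is not None:
        # Cache hit: return immediately; no input tensors are allocated.
        return cached
    # Cache miss: only now pay for tensor preparation and profiling.
    tensors = prepare_input_tensors()
    _profiling_cache[cache_key] = profile_tactics(tensors)
    return _profiling_cache[cache_key]


# The second call with the same key skips preparation entirely.
choose_tactic("gemm_m1024", lambda: ["a", "b"], lambda ts: "tactic_0")
choose_tactic("gemm_m1024", lambda: ["a", "b"], lambda ts: "tactic_0")
```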

tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)

621-623: LGTM! Consistent use of parallel_rank for output slicing.

The change from self.mapping.tp_rank to self.parallel_rank aligns with the new attribute introduced in the base MoE class and ensures correct indexing into all_rank_num_tokens for the current rank's outputs.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (2)

846-846: Critical fix: Using correct rank for chunk size indexing.

This change from self.rank to self.parallel_rank fixes the bug mentioned in the PR title where all_rank_chunk_size_list was being indexed with the global rank instead of the TP rank. When using attention DP with chunking, the chunk size list is indexed by TP rank, not global rank.
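To make the failure mode concrete, here is a hedged sketch of chunked dispatch (the flat per-TP-rank layout of all_rank_chunk_size_list is an assumption for illustration, not copied from fused_moe_wide_ep.py):

```python
def iter_local_chunks(num_tokens, all_rank_chunk_size_list, parallel_rank):
    # Assumed layout: one chunk size per TP rank, i.e. tp_size entries.
    # Indexing with the global rank overruns the list whenever the job has
    # more global ranks than TP ranks.
    chunk_size = all_rank_chunk_size_list[parallel_rank]
    for start in range(0, num_tokens, chunk_size):
        yield start, min(start + chunk_size, num_tokens)


# tp_size=2 inside an 8-GPU job: global ranks 2..7 would index out of range.
for start, end in iter_local_chunks(10, [4, 3], parallel_rank=1):
    print(start, end)  # (0, 3), (3, 6), (6, 9), (9, 10)
```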


934-935: LGTM! Consistent use of parallel_rank for output slicing.

The change from self.mapping.tp_rank to self.parallel_rank maintains consistency with the new attribute and ensures correct slicing of outputs based on the TP rank's token count.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (2)

709-709: Critical fix: Using correct rank for chunk size indexing.

This change from self.rank to self.parallel_rank fixes the chunked MoE rank indexing bug, ensuring that all_rank_chunk_size_list is accessed with the TP rank instead of the global rank in attention DP scenarios.


781-782: LGTM! Consistent use of parallel_rank for output slicing.

Aligns with the changes across other MoE implementations to use self.parallel_rank for indexing all_rank_num_tokens.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (3)

585-585: Critical fix: Using correct rank for chunk size indexing.

This change from self.rank to self.parallel_rank fixes the chunked MoE rank indexing bug. When attention DP is enabled with chunking, the chunk size list must be indexed by TP rank, not global rank.


644-645: LGTM! Consistent use of parallel_rank for output slicing.

The change ensures correct slicing of outputs based on the TP rank's token count in attention DP scenarios.


673-673: LGTM! Consistent use of parallel_rank in forward_fake.

The change maintains consistency by using self.parallel_rank when determining the number of tokens for the current rank in fake forward mode with alltoall enabled.

tensorrt_llm/_torch/modules/fused_moe/interface.py (1)

172-172: mapping.tp_rank is correct for attention DP scenarios with chunking.

In attention DP with chunking, tokens are distributed across tensor parallelism (TP) ranks. The all_rank_num_tokens list is indexed by the general TP rank participating in attention computation, not by expert parallelism rank. Line 231 in the same file confirms this pattern: num_tokens = all_rank_num_tokens[self.mapping.tp_rank].

The distinction between tp_rank and moe_tp_rank is that tp_rank represents general tensor parallelism (used for distributing tokens, embeddings, and attention heads), while moe_tp_rank represents expert parallelism (used only for MoE expert weight splitting). For attention DP chunking, mapping.tp_rank is the correct rank to use.
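A small sketch of this indexing convention (only the num_tokens line mirrors the quoted interface.py pattern; the padding and slicing step is an illustrative assumption):

```python
import torch


def slice_local_output(outputs, all_rank_num_tokens, tp_rank):
    # Mirrors the quoted pattern: per-rank token counts are ordered by the
    # general TP rank that participated in attention.
    num_tokens = all_rank_num_tokens[tp_rank]
    # Assumed step: keep only this rank's rows of the padded output.
    return outputs[:num_tokens]


all_rank_num_tokens = [4, 3, 5]  # tokens contributed per TP rank
outputs = torch.zeros(5, 16)     # padded to the max per-rank token count
print(slice_local_output(outputs, all_rank_num_tokens, tp_rank=1).shape)
# torch.Size([3, 16])
```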



@tensorrt-cicd (Collaborator)

PR_Github #22760 [ run ] triggered by Bot. Commit: 3346541

@tensorrt-cicd (Collaborator)

PR_Github #22760 [ run ] completed with state SUCCESS. Commit: 3346541
/LLM/main/L0_MergeRequest_PR pipeline #17162 completed with status: 'FAILURE'

@jinyangyuan-nvidia (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22828 [ run ] triggered by Bot. Commit: 626da42

@tensorrt-cicd (Collaborator)

PR_Github #22828 [ run ] completed with state SUCCESS. Commit: 626da42
/LLM/main/L0_MergeRequest_PR pipeline #17219 completed with status: 'FAILURE'

@jinyangyuan-nvidia (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22844 [ run ] triggered by Bot. Commit: 1b7f2bb

@tensorrt-cicd (Collaborator)

PR_Github #22844 [ run ] completed with state SUCCESS. Commit: 1b7f2bb
/LLM/main/L0_MergeRequest_PR pipeline #17228 completed with status: 'FAILURE'

@jinyangyuan-nvidia (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22961 [ run ] triggered by Bot. Commit: 7c03bcb

@tensorrt-cicd (Collaborator)

PR_Github #22961 [ run ] completed with state SUCCESS. Commit: 7c03bcb
/LLM/main/L0_MergeRequest_PR pipeline #17312 completed with status: 'FAILURE'

…ize_list when chunked MoE is used

Signed-off-by: Jinyang Yuan <[email protected]>
@jinyangyuan-nvidia (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #23003 [ run ] triggered by Bot. Commit: 46731e0
