[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode #9308
Conversation
/bot run --stage-list "DGX_H100-2_GPUs-PyTorch-Others-1, DGX_H100-4_GPUs-CPP-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-Others-1"

PR_Github #25072 [ run ] triggered by Bot.

PR_Github #25072 [ run ] completed.

/bot run

PR_Github #25199 [ run ] triggered by Bot.

PR_Github #25199 [ run ] completed.

/bot run

PR_Github #25221 [ run ] triggered by Bot.

PR_Github #25221 [ run ] completed.

Force-pushed from 839e731 to 767d19d

/bot run

PR_Github #25338 [ run ] triggered by Bot.
📝 Walkthrough

This PR modifies in-flight request tracking and pause logic across multiple components. Changes include capturing and logging pause operation return values, refactoring context and generation request handling into separate loops, adjusting sequence calculation metrics, and restructuring batch state management to explicitly track finished context requests through the PyExecutor pipeline.
Sequence Diagram(s)

sequenceDiagram
participant Pipeline as PyExecutor Pipeline
participant Add as _add_inflight_ids
participant BatchState as BatchStatePP
participant Remove as _remove_inflight_ids
participant Finalize as Batch Finalization
Pipeline->>Add: Call with current requests
Add->>Add: Collect finished_ctx_reqs from context
Add->>Add: Insert all requests into inflight tracking
Add-->>Pipeline: Return finished_ctx_reqs
Pipeline->>BatchState: Create new BatchStatePP(finished_ctx_reqs)
Pipeline->>Pipeline: Process microbatch
Pipeline->>Finalize: Finalize previous batch
Finalize->>Finalize: Reset context_requests to<br/>finished_ctx_reqs
Finalize->>Remove: Call with previous_batch (BatchStatePP)
Remove->>Remove: Use batch_state.finished_ctx_reqs<br/>+ scheduled_requests
Remove->>Remove: Remove from inflight tracking
Finalize-->>Pipeline: Batch complete
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Actionable comments posted: 0
🧹 Nitpick comments (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
106-109: Clarify `finished_ctx_reqs` semantics in `BatchStatePP`

The extra field is straightforward, but its meaning ("subset of `scheduled_ctx_reqs` that hit the last context chunk in this microbatch") is non-obvious from the type alone. A brief comment here (or at the construction site in `_executor_loop_pp`) would make the PP bookkeeping easier to follow and reduce the risk of future misuse.

Also, if you ever construct `BatchStatePP` outside `_executor_loop_pp`, consider a `default_factory=list` instead of `None` for `finished_ctx_reqs` to avoid accidental `None` iteration later. Right now all call sites pass an explicit list, so this is purely defensive.
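A minimal sketch of the suggested defensive default, assuming `BatchStatePP` is (or wraps) a dataclass and that the import path matches the file layout shown in this review; the field layout here is illustrative, not the actual definition in `py_executor.py`:

```python
from dataclasses import dataclass, field
from typing import List

from tensorrt_llm._torch.pyexecutor.llm_request import LlmRequest


@dataclass
class BatchStatePP:
    # Subset of scheduled_ctx_reqs that reached their last context chunk in
    # this microbatch; only these entries are tracked in the inflight set.
    # default_factory=list avoids accidental iteration over None if the state
    # is ever constructed outside _executor_loop_pp.
    finished_ctx_reqs: List[LlmRequest] = field(default_factory=list)
```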
2488-2502: Inflight set handling for context chunks and generations looks right; consider documenting invariants

The revised inflight logic:

- In `_add_inflight_ids`:
  - Only context requests with `req.is_last_context_chunk` are inserted into `self.inflight_req_ids` and collected into `finished_ctx_reqs`.
  - All generation requests are still inserted.
- In `_remove_inflight_ids`:
  - Context IDs are erased from `self.inflight_req_ids` using `batch_state.finished_ctx_reqs`.
  - Generation IDs are erased using `batch_state.sample_state.scheduled_requests.generation_requests`.

This gives a clean lifecycle:

- Non-final context chunks are never in the inflight set, so the scheduler can keep scheduling further context chunks while earlier ones are in the PP pipeline.
- Final context chunks and generation requests are marked inflight at queue time.
- Once PP communication and response handling for that microbatch complete, they are removed via `_remove_inflight_ids`.

Mechanically this looks correct and aligns with the C++ changes described in the PR (see the sketch below). Given how easy it is to break this invariant later, I'd suggest:

- Adding a brief docstring note that `finished_ctx_reqs` must exactly mirror the context IDs inserted here, and that non-final context chunks are intentionally excluded from the inflight set.
- Optionally, clarifying whether `request_id` vs `py_request_id` is the canonical ID for `ReqIdsSet`, since logs and other code paths mostly use `py_request_id`. Even a one-line comment at the `ReqIdsSet` initialization would avoid confusion.

Also applies to: 2503-2514
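A condensed Python sketch of the lifecycle described above; it assumes the attribute and method names quoted in the review (`inflight_req_ids` with `insert`/`erase`, `is_last_context_chunk`, `py_request_id`) and omits the logging, so treat it as an outline rather than the actual `py_executor.py` code:

```python
def _add_inflight_ids(self, scheduled_requests):
    """Mark requests as inflight; return context requests on their last chunk."""
    finished_ctx_reqs = []
    for req in scheduled_requests.context_requests:
        # Non-final context chunks stay out of the inflight set so the
        # scheduler can pick up the next chunk in the next microbatch.
        if req.is_last_context_chunk:
            self.inflight_req_ids.insert(req.py_request_id)
            finished_ctx_reqs.append(req)
    for req in scheduled_requests.generation_requests:
        # Generation requests are always inflight until the microbatch finishes.
        self.inflight_req_ids.insert(req.py_request_id)
    return finished_ctx_reqs


def _remove_inflight_ids(self, batch_state):
    """Remove the requests that were marked inflight for this microbatch."""
    for req in batch_state.finished_ctx_reqs:
        self.inflight_req_ids.erase(req.py_request_id)
    for req in batch_state.sample_state.scheduled_requests.generation_requests:
        self.inflight_req_ids.erase(req.py_request_id)
```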
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cpp/tensorrt_llm/batch_manager/pauseRequests.cpp (1 hunks)
- cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (1 hunks)
- tensorrt_llm/_torch/pyexecutor/_util.py (1 hunks)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (6 hunks)
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.
Applied to files:
- tensorrt_llm/_torch/pyexecutor/_util.py
- cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-26T06:07:02.166Z
Learnt from: shaharmor98
Repo: NVIDIA/TensorRT-LLM PR: 7231
File: tensorrt_llm/_torch/pyexecutor/_util.py:504-509
Timestamp: 2025-08-26T06:07:02.166Z
Learning: In tensorrt_llm/_torch/pyexecutor/_util.py, when calling model_engine.set_lora_model_config(), pass model_binding_config.mlp_hidden_size directly without multiplying by mapping.tp_size, as the mlp_hidden_size from get_bindings_model_config() is already the per-TP rank value needed for LoRA weight packaging.
Applied to files:
tensorrt_llm/_torch/pyexecutor/_util.py
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.
Applied to files:
- cpp/tensorrt_llm/batch_manager/pauseRequests.cpp
- cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.
Applied to files:
- cpp/tensorrt_llm/batch_manager/pauseRequests.cpp
- cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
Repo: NVIDIA/TensorRT-LLM PR: 6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.
Applied to files:
- cpp/tensorrt_llm/batch_manager/pauseRequests.cpp
- cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
🧬 Code graph analysis (2)
tensorrt_llm/_torch/pyexecutor/_util.py (3)
- tests/unittest/llmapi/apps/_test_openai_misc.py (2): max_batch_size (30-31), max_seq_len (37-38)
- tensorrt_llm/_torch/models/checkpoints/base_weight_mapper.py (1): mapping (152-153)
- tensorrt_llm/_torch/distributed/communicator.py (1): pp_size (59-60)

tensorrt_llm/_torch/pyexecutor/py_executor.py (3)
- tensorrt_llm/_torch/pyexecutor/llm_request.py (1): LlmRequest (437-662)
- tensorrt_llm/logger.py (1): debug (144-145)
- tests/unittest/_torch/sampler/test_torch_sampler.py (4): context_requests (436-447), context_requests (1007-1009), generation_requests (450-460), generation_requests (1012-1014)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tensorrt_llm/_torch/pyexecutor/_util.py (1)
677-680: LGTM! Calculation correctly scales max_num_sequences for pipeline parallelism.

The computation `max_num_sequences = max_batch_size * mapping.pp_size` appropriately accounts for the fact that each pipeline stage can process its own batch of requests. This aligns with the PR's goal of enabling context chunk overlap in pipeline parallel mode and is consistent with the usage in `create_torch_sampler_args` (line 828).
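A small illustration of the scaling, with made-up numbers standing in for the executor config and `Mapping` values:

```python
# Illustrative values only; in _util.py these come from the executor config
# (max_batch_size) and the parallel Mapping (pp_size).
max_batch_size = 8
pp_size = 4

# Each pipeline stage can have its own microbatch in flight, so the sampler
# must be sized for max_batch_size sequences per stage.
max_num_sequences = max_batch_size * pp_size
assert max_num_sequences == 32
```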
cpp/tensorrt_llm/batch_manager/pauseRequests.cpp (1)

53-54: LGTM! Useful logging enhancement for inflight tracking.

Capturing and logging the return value of `erase()` provides helpful visibility into whether a request was actually present in the inflight set when removed. This observability improvement aligns well with the PR's changes to inflight request tracking.

cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (1)
1052-1068: LGTM! Core chunking overlap logic correctly implemented.

The restructured inflight tracking properly implements the PR's objective:

- Context requests: only added to inflight when `isLastContextChunk()` is true, allowing non-final chunks to be rescheduled in subsequent micro-batches.
- Generation requests: always added to inflight to prevent concurrent scheduling.
The split into separate loops with explicit logging makes the behavior clear and aligns with the documented performance improvements for chunked context processing in pipeline parallel mode.
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
837-844: `finished_ctx_reqs` propagation through PP loop is coherent but subtly changes who is "scheduled"

The new flow:

- On queue: log queueability, call `_add_inflight_ids` and capture `finished_ctx_reqs`.
- Build `BatchStatePP` with both `scheduled_ctx_reqs` (full ctx list) and `finished_ctx_reqs` (last-chunk subset).
- In Stage 3, overwrite `sample_state.scheduled_requests.context_requests` with `previous_batch.finished_ctx_reqs` before:
  - `_update_requests(previous_batch.sample_state)`,
  - `_handle_logits_communication(previous_batch, ...)`,
  - `resource_manager.update_resources(previous_scheduled_batch, ...)`,
  - `_process_iter_stats(...)` (via `batch_state.sample_state.scheduled_requests`),
  - and finally `_remove_inflight_ids(previous_batch)`.

This matches the PR intent: only context requests that have reached their last chunk are treated as "inflight" at the decoder level and then removed once PP communication completes, while intermediate chunks stay out of the inflight set and can be rescheduled. A sketch of this finalization step is given below.

The trade-off is that for context-chunked requests, intermediate chunks are now invisible to:

- per-iteration `RequestStats.scheduled` computation, and
- `resource_manager.update_resources` (only the last chunk for a given request is seen there).

If that's intentional (i.e., stats and resource updates are supposed to be per-request, not per-chunk), it would be worth adding a short comment around the reassignment to make this clear to future readers; otherwise, you may want to reconsider whether some parts (e.g., resource updates or stats) should still see the full `scheduled_ctx_reqs` instead of only `finished_ctx_reqs`.

Also applies to: 897-904, 956-960, 992-993
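To make the reassignment easier to picture, here is a heavily simplified sketch of the Stage 3 finalization path; method names follow the review text, arguments elided there are left as `...`, and the real control flow in `_executor_loop_pp` contains more steps:

```python
def _finalize_previous_batch(self, previous_batch):
    """Simplified outline of Stage 3 in _executor_loop_pp (not the real code)."""
    sample_state = previous_batch.sample_state

    # Only context requests that reached their last chunk in this microbatch
    # are treated as "scheduled" from here on; intermediate chunks were never
    # added to the inflight set and are rescheduled by the scheduler instead.
    sample_state.scheduled_requests.context_requests = previous_batch.finished_ctx_reqs

    self._update_requests(sample_state)
    self._handle_logits_communication(previous_batch, ...)
    self.resource_manager.update_resources(sample_state.scheduled_requests, ...)
    self._process_iter_stats(...)

    # Finally, drop this microbatch's inflight entries (last context chunks
    # plus generation requests).
    self._remove_inflight_ids(previous_batch)
```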
pcastonguay
left a comment
lgtm. Is there a test we could write to verify the changes work as expected? For example, check the number of iterations required to process the request.
PR_Github #25338 [ run ] completed.
achartier
left a comment
lgtm
- For context chunks there is no dependency on the results of the last pipeline rank, so they can be scheduled in each iteration.
- To achieve this, context requests that are still chunking are not added to the inflight set, so they can be scheduled in the next micro batch.
- Context requests that reach the last context chunk are added to the inflight set, so they are not scheduled in the next micro batch and generation can run without overlap.
- Enhanced logging for inflight set management.

Signed-off-by: Robin Kobus <[email protected]>
Signed-off-by: Robin Kobus <[email protected]>
- Added `finished_ctx_reqs` to `BatchStatePP` to track completed context requests.
- Updated `_add_inflight_ids` to return finished context requests for better state management.
- Enhanced `_remove_inflight_ids` to utilize finished context requests from `BatchStatePP`.
- Added debug logging for queuing decisions and inflight request management.

Signed-off-by: Robin Kobus <[email protected]>
Signed-off-by: Robin Kobus <[email protected]>
- Updated test parameters to include `enable_chunked_prefill` for both synchronous and asynchronous LLM stats tests.
- Modified the `validate_stats` function to account for chunked prefill behavior in result validation.
- Improved test harnesses to handle the new parameter and ensure correct behavior with chunked prefill enabled.

Signed-off-by: Robin Kobus <[email protected]>
- Introduced new test cases for LLM stats to validate behavior with multiple pipeline parallel configurations.
- Added micro batch ID tracking to LLM stats and verified it in the test cases.
- Used the new test cases to verify the new pipeline parallel mode behavior with chunked prefill enabled.

Signed-off-by: Robin Kobus <[email protected]>
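As a rough illustration of the coverage these commits describe, a parametrized test might look like the sketch below; `llm_get_stats_test_harness` is the harness named in the PR, but its exact signature, location, and the parameter values here are assumptions:

```python
import pytest


def llm_get_stats_test_harness(*, pp_size: int, enable_chunked_prefill: bool) -> None:
    """Stand-in for the real harness; this signature is an assumption."""
    ...


@pytest.mark.parametrize("pp_size", [2, 4])
@pytest.mark.parametrize("enable_chunked_prefill", [False, True])
def test_llm_get_stats_pp_chunked_prefill(pp_size, enable_chunked_prefill):
    # With chunked prefill in pipeline-parallel mode, context chunks of one
    # request can run in consecutive micro batches, so the stats validation
    # (iteration counts, micro batch IDs) must account for per-chunk steps.
    llm_get_stats_test_harness(pp_size=pp_size,
                               enable_chunked_prefill=enable_chunked_prefill)
```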
Force-pushed from 68847d1 to f25c023

/bot run

PR_Github #25418 [ run ] triggered by Bot.
Description
Benchmark
Somewhat artificial benchmark to show the benefits:
Test Coverage
Updated `llm_get_stats_test_harness` to include chunked prefill and pipeline parallelism support.

PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

Kill all running builds associated with the pull request.

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.