
Conversation

@Funatiq Funatiq commented Nov 19, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Improved batch request tracking and processing to correctly manage context and generation request flows in pipeline execution.
    • Enhanced concurrent request capacity calculations to properly account for distributed processing configuration.


Description

  • For context chunks there is no dependency on the results of the last pipeline rank, so they can be scheduled in each iteration.
  • To achieve this, context requests that are still chunking are not added to the inflight set, so they can be scheduled in the next micro batch.
  • Context requests that reach their last context chunk are added to the inflight set, so they are not scheduled in the next micro batch and generation can run without overlap (a minimal sketch of this rule follows below).
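
The core of the change is a small scheduling decision. Below is a standalone Python sketch of that rule, not the actual `PyExecutor._add_inflight_ids` implementation; the `is_last_context_chunk` and `py_request_id` attributes are taken from the code under review, everything else is illustrative.

```python
from typing import Iterable, List, Set


def add_inflight_ids(inflight_req_ids: Set[int],
                     context_requests: Iterable,
                     generation_requests: Iterable) -> List:
    """Sketch: mark requests as in-flight for one PP micro batch."""
    finished_ctx_reqs = []
    for req in context_requests:
        if req.is_last_context_chunk:
            # Last chunk: block rescheduling until this micro batch completes.
            inflight_req_ids.add(req.py_request_id)
            finished_ctx_reqs.append(req)
        # Intermediate chunks stay out of the set and remain schedulable.
    for req in generation_requests:
        # Generation depends on results from the last pipeline rank.
        inflight_req_ids.add(req.py_request_id)
    return finished_ctx_reqs
```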

Benchmark

Somewhat artificial benchmark to show the benefits:

  • Hardware: 2xA100 PCIe with pipeline parallelism (PP=2)
  • Model: Qwen3-0.6B
  • Dataset: 256 requests, ISL=8K, OSL=1
  • trtllm-bench with concurrency=1

| Backend | Branch | max_num_tokens | Req/s | Out tok/s | Total tok/s | Avg Latency (ms) |
|---|---|---|---|---|---|---|
| pytorch | main | 32768 (no chunking) | 7.80 | 7.80 | 63929.70 | 128.12 |
| pytorch | overlap | 32768 (no chunking) | 7.81 | 7.81 | 63974.95 | 128.03 |
| pytorch | main | 2048 (chunking) | 6.86 | 6.86 | 56213.56 | 145.71 |
| pytorch | overlap | 2048 (chunking) | 9.98 | 9.98 | 81730.08 | 100.21 |
| tensorrt | main | 32768 (no chunking) | 6.97 | 6.97 | 57126.19 | 143.38 |
| tensorrt | overlap | 32768 (no chunking) | 7.14 | 7.14 | 58478.51 | 140.07 |
| tensorrt | main | 2048 (chunking) | 4.84 | 4.84 | 39644.04 | 206.63 |
| tensorrt | overlap | 2048 (chunking) | 6.87 | 6.87 | 56272.15 | 145.56 |

Test Coverage

  • Updated the llm_get_stats_test_harness to include chunked prefill and pipeline parallelism support.
  • Added micro batch ID tracking to verify the new pipeline parallel mode behavior with chunked prefill enabled.
  • Added test cases for PP size 2 and 4.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@Funatiq Funatiq changed the title from "Dev/feat/overlap ctx chunks" to "[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode" on Nov 19, 2025
Funatiq commented Nov 19, 2025

/bot run --stage-list "DGX_H100-2_GPUs-PyTorch-Others-1, DGX_H100-4_GPUs-CPP-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-Others-1"

PR_Github #25072 [ run ] triggered by Bot. Commit: 4d9fc3c

PR_Github #25072 [ run ] completed with state SUCCESS. Commit: 4d9fc3c
/LLM/main/L0_MergeRequest_PR pipeline #18952 (Partly Tested) completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

Funatiq commented Nov 20, 2025

/bot run

PR_Github #25199 [ run ] triggered by Bot. Commit: 4d9fc3c

PR_Github #25199 [ run ] completed with state SUCCESS. Commit: 4d9fc3c
/LLM/main/L0_MergeRequest_PR pipeline #19054 completed with status: 'FAILURE'

Funatiq commented Nov 20, 2025

/bot run

PR_Github #25221 [ run ] triggered by Bot. Commit: 4d9fc3c

PR_Github #25221 [ run ] completed with state SUCCESS. Commit: 4d9fc3c
/LLM/main/L0_MergeRequest_PR pipeline #19076 completed with status: 'FAILURE'

@Funatiq Funatiq force-pushed the dev/feat/overlap_ctx_chunks branch 2 times, most recently from 839e731 to 767d19d on November 21, 2025 08:40
Funatiq commented Nov 21, 2025

/bot run

@Funatiq Funatiq marked this pull request as ready for review November 21, 2025 08:41
@Funatiq Funatiq requested a review from a team as a code owner November 21, 2025 08:41
PR_Github #25338 [ run ] triggered by Bot. Commit: 767d19d

coderabbitai bot commented Nov 21, 2025

📝 Walkthrough

This PR modifies in-flight request tracking and pause logic across multiple components. Changes include capturing and logging pause operation return values, refactoring context and generation request handling into separate loops, adjusting sequence calculation metrics, and restructuring batch state management to explicitly track finished context requests through the PyExecutor pipeline.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **C++ pause and batch handling**<br>`cpp/tensorrt_llm/batch_manager/pauseRequests.cpp`, `cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp` | pauseRequests: captured and logged the return value (count of erased entries) from `inflightReqIds.erase()`. trtGptModelInflightBatching: removed the pause call for contextRequests in forwardSync; refactored forwardAsync to iterate contextRequests and generationRequests separately, with added logging for each cohort, aligning with chunking behavior. |
| **Python sequence and max calculations**<br>`tensorrt_llm/_torch/pyexecutor/_util.py` | Introduced `max_num_sequences`, calculated as `max_batch_size` multiplied by `pp_size`, and used it in log messages as `max_num_requests`. Removed the redundant redefinition later in `create_py_executor_instance`, ensuring `SeqSlotManager` consistently uses the computed value. |
| **Python batch state and pipeline tracking**<br>`tensorrt_llm/_torch/pyexecutor/py_executor.py` | Added a `finished_ctx_reqs` field to `BatchStatePP` to track completed context requests per microbatch. Updated `_add_inflight_ids` to collect and return finished context requests. Changed the `_remove_inflight_ids` signature to accept `BatchStatePP` instead of `scheduled_requests`. Updated `_executor_loop_pp` to capture `finished_ctx_reqs` and pass it through batch state. Modified batch finalization to reset `context_requests` to `finished_ctx_reqs` and adjusted inflight ID removal logic. |
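
The `_util.py` row above boils down to a single scaling rule: with pipeline parallelism, up to `pp_size` micro batches can be in flight at once, so sequence-slot capacity must scale accordingly. A minimal sketch of that rule (the function name and surrounding setup are illustrative, only the formula comes from the change summary):

```python
def compute_max_num_sequences(max_batch_size: int, pp_size: int) -> int:
    # Each pipeline stage can hold its own micro batch in flight, so the
    # executor needs sequence slots for pp_size concurrent batches.
    return max_batch_size * pp_size


# e.g. max_batch_size=64 with PP=2 -> 128 sequence slots
assert compute_max_num_sequences(64, 2) == 128
```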

Sequence Diagram(s)

sequenceDiagram
    participant Pipeline as PyExecutor Pipeline
    participant Add as _add_inflight_ids
    participant BatchState as BatchStatePP
    participant Remove as _remove_inflight_ids
    participant Finalize as Batch Finalization

    Pipeline->>Add: Call with current requests
    Add->>Add: Collect finished_ctx_reqs from context
    Add->>Add: Insert all requests into inflight tracking
    Add-->>Pipeline: Return finished_ctx_reqs
    Pipeline->>BatchState: Create new BatchStatePP(finished_ctx_reqs)
    Pipeline->>Pipeline: Process microbatch
    Pipeline->>Finalize: Finalize previous batch
    Finalize->>Finalize: Reset context_requests to<br/>finished_ctx_reqs
    Finalize->>Remove: Call with previous_batch (BatchStatePP)
    Remove->>Remove: Use batch_state.finished_ctx_reqs<br/>+ scheduled_requests
    Remove->>Remove: Remove from inflight tracking
    Finalize-->>Pipeline: Batch complete

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • py_executor.py: Significant refactoring of batch state lifecycle and in-flight tracking logic. Requires careful review of the new finished_ctx_reqs flow from collection through finalization, especially the interaction with batch state transitions and scheduled request management.
  • trtGptModelInflightBatching.cpp: Changes to which requests are paused and refactoring of separate iteration loops. Need to verify the pause removal doesn't create request tracking inconsistencies and that separate loop handling preserves intended batching semantics.
  • pauseRequests.cpp & _util.py: Lower complexity individually, but need to verify integration with broader tracking changes.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 25.00%, which is insufficient. The required threshold is 80.00%. | You can run `@coderabbitai generate docstrings` to improve docstring coverage. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title clearly identifies the main change: enabling context chunk overlap in pipeline parallel mode, which is the core feature described throughout the PR. |
| Description check | ✅ Passed | The PR description is comprehensive, with a clear technical explanation, detailed benchmark results, and test coverage information. All major sections are present and properly filled out. |


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)

106-109: Clarify finished_ctx_reqs semantics in BatchStatePP

The extra field is straightforward, but its meaning (“subset of scheduled_ctx_reqs that hit the last context chunk in this microbatch”) is non‑obvious from the type alone. A brief comment here (or at the construction site in _executor_loop_pp) would make the PP bookkeeping easier to follow and reduce the risk of future misuse.

Also, if you ever construct BatchStatePP outside _executor_loop_pp, consider a default_factory=list instead of None for finished_ctx_reqs to avoid accidental None iteration later. Right now all call sites pass an explicit list, so this is purely defensive.
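
For reference, the defensive variant suggested here would look roughly like this; only `finished_ctx_reqs` and `sample_state` are taken from the review, the overall shape of `BatchStatePP` is an assumption, not the real class:

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional


@dataclass
class BatchStatePP:
    # ... existing fields such as sample_state, scheduled_ctx_reqs, micro_batch_id ...
    sample_state: Optional[Any] = None

    # Subset of scheduled_ctx_reqs that reached their last context chunk in this
    # microbatch; default_factory avoids accidental iteration over None if the
    # state is ever constructed without an explicit list.
    finished_ctx_reqs: List[Any] = field(default_factory=list)
```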


2488-2502: Inflight set handling for context chunks and generations looks right; consider documenting invariants

The revised inflight logic:

  • In _add_inflight_ids:
    • Only context requests with req.is_last_context_chunk are inserted into self.inflight_req_ids and collected into finished_ctx_reqs.
    • All generation requests are still inserted.
  • In _remove_inflight_ids:
    • Context IDs are erased from self.inflight_req_ids using batch_state.finished_ctx_reqs.
    • Generation IDs are erased using batch_state.sample_state.scheduled_requests.generation_requests.

This gives a clean lifecycle:

  1. Non‑final context chunks are never in the inflight set → scheduler can keep scheduling further context chunks while earlier ones are in the PP pipeline.
  2. Final context chunk and generation requests are marked inflight at queue time.
  3. Once PP communication and response handling for that microbatch complete, they are removed via _remove_inflight_ids.

Mechanically this looks correct and aligns with the C++ changes described in the PR. Given how easy it is to break this invariant later, I’d suggest:

  • Adding a brief docstring note that finished_ctx_reqs must exactly mirror the context IDs inserted here, and that non‑final context chunks are intentionally excluded from the inflight set.
  • Optionally, clarifying whether request_id vs py_request_id is the canonical ID for ReqIdsSet, since logs and other code paths mostly use py_request_id. Even a one‑line comment at the ReqIdsSet initialization would avoid confusion.

Also applies to: 2503-2514
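
A rough sketch of the removal side implied by this invariant; the function is a hypothetical standalone version of `_remove_inflight_ids`, but the attribute paths (`finished_ctx_reqs`, `sample_state.scheduled_requests.generation_requests`, `py_request_id`) are the ones described above:

```python
def remove_inflight_ids(inflight_req_ids: set, batch_state) -> None:
    """Sketch: undo the insertions once the microbatch has fully completed."""
    # Only last-chunk context requests were inserted, so only those are removed.
    for req in batch_state.finished_ctx_reqs:
        inflight_req_ids.discard(req.py_request_id)
    # All generation requests of the microbatch were inserted and are removed here.
    for req in batch_state.sample_state.scheduled_requests.generation_requests:
        inflight_req_ids.discard(req.py_request_id)
```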

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b6483ef and 767d19d.

📒 Files selected for processing (4)
  • cpp/tensorrt_llm/batch_manager/pauseRequests.cpp (1 hunks)
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/_util.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (6 hunks)
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-26T06:07:02.166Z
Learnt from: shaharmor98
Repo: NVIDIA/TensorRT-LLM PR: 7231
File: tensorrt_llm/_torch/pyexecutor/_util.py:504-509
Timestamp: 2025-08-26T06:07:02.166Z
Learning: In tensorrt_llm/_torch/pyexecutor/_util.py, when calling model_engine.set_lora_model_config(), pass model_binding_config.mlp_hidden_size directly without multiplying by mapping.tp_size, as the mlp_hidden_size from get_bindings_model_config() is already the per-TP rank value needed for LoRA weight packaging.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/pauseRequests.cpp
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/pauseRequests.cpp
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
Repo: NVIDIA/TensorRT-LLM PR: 6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/pauseRequests.cpp
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
🧬 Code graph analysis (2)
tensorrt_llm/_torch/pyexecutor/_util.py (3)
tests/unittest/llmapi/apps/_test_openai_misc.py (2)
  • max_batch_size (30-31)
  • max_seq_len (37-38)
tensorrt_llm/_torch/models/checkpoints/base_weight_mapper.py (1)
  • mapping (152-153)
tensorrt_llm/_torch/distributed/communicator.py (1)
  • pp_size (59-60)
tensorrt_llm/_torch/pyexecutor/py_executor.py (3)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
  • LlmRequest (437-662)
tensorrt_llm/logger.py (1)
  • debug (144-145)
tests/unittest/_torch/sampler/test_torch_sampler.py (4)
  • context_requests (436-447)
  • context_requests (1007-1009)
  • generation_requests (450-460)
  • generation_requests (1012-1014)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tensorrt_llm/_torch/pyexecutor/_util.py (1)

677-680: LGTM! Calculation correctly scales max_num_sequences for pipeline parallelism.

The computation max_num_sequences = max_batch_size * mapping.pp_size appropriately accounts for the fact that each pipeline stage can process its own batch of requests. This aligns with the PR's goal of enabling context chunk overlap in pipeline parallel mode and is consistent with the usage in create_torch_sampler_args (line 828).

cpp/tensorrt_llm/batch_manager/pauseRequests.cpp (1)

53-54: LGTM! Useful logging enhancement for inflight tracking.

Capturing and logging the return value of erase() provides helpful visibility into whether a request was actually present in the inflight set when removed. This observability improvement aligns well with the PR's changes to inflight request tracking.

cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (1)

1052-1068: LGTM! Core chunking overlap logic correctly implemented.

The restructured inflight tracking properly implements the PR's objective:

  1. Context requests: Only added to inflight when isLastContextChunk() is true, allowing non-final chunks to be rescheduled in subsequent micro-batches.
  2. Generation requests: Always added to inflight to prevent concurrent scheduling.

The split into separate loops with explicit logging makes the behavior clear and aligns with the documented performance improvements for chunked context processing in pipeline parallel mode.

tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

837-844: finished_ctx_reqs propagation through PP loop is coherent but subtly changes who is “scheduled”

The new flow:

  • On queue: log queueability, call _add_inflight_ids and capture finished_ctx_reqs.
  • Build BatchStatePP with both scheduled_ctx_reqs (full ctx list) and finished_ctx_reqs (last‑chunk subset).
  • In Stage 3, overwrite sample_state.scheduled_requests.context_requests with previous_batch.finished_ctx_reqs before:
    • _update_requests(previous_batch.sample_state),
    • _handle_logits_communication(previous_batch, ...),
    • resource_manager.update_resources(previous_scheduled_batch, ...),
    • _process_iter_stats(...) (via batch_state.sample_state.scheduled_requests),
    • and finally _remove_inflight_ids(previous_batch).

This matches the PR intent: only context requests that have reached their last chunk are treated as “inflight” at the decoder level and then removed once PP communication completes, while intermediate chunks stay out of the inflight set and can be rescheduled.

The trade‑off is that for context‑chunked requests, intermediate chunks are now invisible to:

  • per‑iteration RequestStats.scheduled computation, and
  • resource_manager.update_resources (only the last chunk for a given request is seen there).

If that’s intentional (i.e., stats and resource updates are supposed to be per‑request, not per‑chunk), it would be worth adding a short comment around the reassignment to make this clear to future readers; otherwise, you may want to reconsider whether some parts (e.g., resource updates or stats) should still see the full scheduled_ctx_reqs instead of only finished_ctx_reqs.

Also applies to: 897-904, 956-960, 992-993
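
Condensed, the finalization flow described above looks roughly like this. This is a paraphrase of the review's reading of `_executor_loop_pp`, not the actual code; argument lists are abbreviated and the wrapper function is hypothetical.

```python
def finalize_previous_batch(executor, previous_batch) -> None:
    """Sketch of the Stage-3 finalization order; only control flow is shown."""
    sample_state = previous_batch.sample_state

    # Narrow the "scheduled" context set to last-chunk requests before the
    # downstream updates, so intermediate chunks are not finalized here.
    sample_state.scheduled_requests.context_requests = previous_batch.finished_ctx_reqs

    executor._update_requests(sample_state)
    # Logits communication, resource updates and iteration stats run here,
    # all seeing the narrowed scheduled_requests (calls and arguments elided).
    executor._remove_inflight_ids(previous_batch)
```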

@pcastonguay pcastonguay left a comment

lgtm. Is there a test we could write to verify the changes work as expected? For example, check the number of iterations required to process the request.

PR_Github #25338 [ run ] completed with state SUCCESS. Commit: 767d19d
/LLM/main/L0_MergeRequest_PR pipeline #19165 completed with status: 'FAILURE'

@achartier achartier left a comment

lgtm

- For context chunks there is no dependency on the results of the last pipeline rank, so they can be scheduled in each iteration.
- To achieve this, context requests that are still chunking are not added to the inflight set, so they are scheduled in the next micro batch.
- Context requests that reach the last context chunk are added to the inflight set, so they are not scheduled in the next micro batch and generation can run without overlap.
- Enhanced logging for inflight set management.

Signed-off-by: Robin Kobus <[email protected]>
- Added `finished_ctx_reqs` to `BatchStatePP` to track completed context requests.
- Updated `_add_inflight_ids` to return finished context requests for better state management.
- Enhanced `_remove_inflight_ids` to utilize finished context requests from `BatchStatePP`.
- Added debug logging for queuing decisions and inflight request management.

Signed-off-by: Robin Kobus <[email protected]>
- Updated test parameters to include `enable_chunked_prefill` for both synchronous and asynchronous LLM stats tests.
- Modified `validate_stats` function to account for chunked prefill behavior in result validation.
- Improved test harnesses to handle new parameter and ensure correct behavior with chunked prefill enabled.

Signed-off-by: Robin Kobus <[email protected]>
- Introduced new test cases for LLM stats to validate behavior with multiple pipeline parallel configurations.
- Added micro batch ID tracking to LLM stats and verify it in the test cases.
- Used the new test cases to verify the new pipeline parallel mode behavior with chunked prefill enabled.

Signed-off-by: Robin Kobus <[email protected]>
@Funatiq Funatiq force-pushed the dev/feat/overlap_ctx_chunks branch from 68847d1 to f25c023 on November 22, 2025 11:13
Funatiq commented Nov 22, 2025

> lgtm. Is there a test we could write to verify the changes work as expected? For example, check the number of iterations required to process the request.

  • Updated the llm_get_stats_test_harness to include chunked prefill and pipeline parallelism support.
  • Added micro batch ID tracking to verify the new pipeline parallel mode behavior with chunked prefill enabled.
  • Added test cases for PP size 2 and 4.

Funatiq commented Nov 22, 2025

/bot run

PR_Github #25418 [ run ] triggered by Bot. Commit: f25c023
