feat: CUDA graph support for packed sequence (variable-length) training #3869
Draft
seonjinn wants to merge 2 commits into NVIDIA:main from
Conversation
Contributor
This PR has been automatically converted to draft because all PRs must start as drafts. When you are ready for review, click Ready for Review to begin the review process. This will:
See the contribution guide for more details.
Enable CUDA graph capture/replay for packed sequence (SFT) training
with Mamba-Transformer hybrid models.
## Problem
CUDA graphs require fixed-shape tensor inputs, but packed sequences
have a variable number of documents per micro-batch, so cu_seqlens
varies in length. This is incompatible with CUDA graph capture.
## Solution
Pad cu_seqlens to a configurable fixed size for CUDA graph replay.
If a batch exceeds this size, fall back to eager forward. This gives
CG benefits for most batches while maintaining correctness for all.
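A minimal sketch of the padding idea, with illustrative names only (the PR's actual padding values and its integration through PackedSeqParams differ; per the commit notes further down, the padded entries can exceed the CP-local token count):

```python
import torch

def pad_cu_seqlens(cu_seqlens: torch.Tensor, max_packed_seqs: int):
    """Pad cu_seqlens to a fixed max_packed_seqs + 1 entries, or signal a fallback.

    Illustrative only: here padding repeats the final boundary, i.e. appends
    zero-length documents, so the tensor shape seen by the captured graph is
    always the same.
    """
    num_seqs = cu_seqlens.numel() - 1  # documents packed in this micro-batch
    if num_seqs > max_packed_seqs:
        # Too many documents for the graph's fixed shape: run this batch eagerly.
        return cu_seqlens, False
    pad_len = max_packed_seqs + 1 - cu_seqlens.numel()
    padding = cu_seqlens[-1:].expand(pad_len)
    return torch.cat([cu_seqlens, padding]), True
```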
## Key Changes
- PackedSeqParams: cu_seqlens padding, shared CG buffers across
layers, dummy PSP for graph capture
- TransformerLayer: CG capture/replay with fallback for attention
- MambaLayer: CG capture/replay with pre-computed seq_idx
- MambaMixer: Avoid dynamic allocations inside CG (seq_idx reuse,
output_size parameter to avoid GPU->CPU sync); a sketch follows this list
- pretrain_mamba: cu_seqlens padding in get_batch()
- New arg: --cuda-graph-max-packed-seqs
- te_patches/: Patch TE context_parallel to avoid GPU->CPU sync
during CUDA graph capture (applied via PYTHONPATH import hook)
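The MambaMixer and te_patches items revolve around the same constraint: any host read of a device tensor (an implicit GPU->CPU sync) is disallowed during CUDA graph capture, and data-dependent shapes cannot be replayed. A hedged sketch of the pattern, with hypothetical function names:

```python
import torch

def build_output_bad(hidden: torch.Tensor, sizes: torch.Tensor) -> torch.Tensor:
    # sizes lives on the GPU; .item() forces a GPU->CPU sync, which fails
    # under graph capture and makes the allocation shape data-dependent.
    out_dim = int(sizes.sum().item())
    return hidden.new_empty(hidden.shape[0], out_dim)

def build_output_graph_safe(hidden: torch.Tensor, output_size: int) -> torch.Tensor:
    # output_size is a plain Python int computed on the host ahead of capture
    # (analogous to the output_size parameter this PR threads into MambaMixer),
    # so the shape is static and no sync happens inside the graph.
    return hidden.new_empty(hidden.shape[0], output_size)
```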
## Usage
### 1. Training script arguments
--cuda-graph-impl transformer_engine \
--cuda-graph-scope mamba attn \
--cuda-graph-max-packed-seqs <MAX_SEQS> \
MAX_SEQS controls the fixed cu_seqlens size:
- Smaller value = less flash_attn padding overhead, more fallbacks
- Larger value = fewer fallbacks, more padding overhead
- Set based on your dataset's N_docs distribution (e.g., P90 or P99)
### 2. Apply TE context_parallel patch
export PYTHONPATH=<repo_root>/te_patches:${PYTHONPATH}
Required to avoid GPU->CPU sync errors during CG capture.
### 3. Example
export PYTHONPATH=/path/to/Megatron-LM/te_patches:${PYTHONPATH}
torchrun pretrain_mamba.py \
--sft \
--cuda-graph-impl transformer_engine \
--cuda-graph-scope mamba attn \
--cuda-graph-max-packed-seqs 64 \
--context-parallel-size 32 \
--tensor-model-parallel-size 8 \
--expert-model-parallel-size 64 \
...
### 4. Choosing MAX_SEQS
Analyze your dataset's packed sequence distribution:
- Example: P50=12, P90=43, P99=106, max=358
- 64 covers ~96% of batches
- 106 covers ~99% but more padding overhead
- Setting to max covers 100% but padding overhead may
outweigh CG benefit
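For example, a quick way to estimate these percentiles from the per-micro-batch document counts (the numbers below are made up for illustration):

```python
import numpy as np

# docs_per_microbatch: len(cu_seqlens) - 1 observed for each micro-batch,
# collected from a pass over the packed SFT dataset (values are illustrative).
docs_per_microbatch = np.array([12, 9, 43, 7, 106, 31, 22, 58, 14, 80])

for p in (50, 90, 99):
    print(f"P{p} = {np.percentile(docs_per_microbatch, p):.0f}")

max_seqs = int(np.percentile(docs_per_microbatch, 90))
coverage = (docs_per_microbatch <= max_seqs).mean()
print(f"--cuda-graph-max-packed-seqs {max_seqs} covers {coverage:.0%} of batches")
```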
Signed-off-by: Seonjin Na <sna@nvidia.com>
Force-pushed from 61d6cbb to 9b5382b
When cu_seqlens is CG-padded, the last entry exceeds total_tokens (CP-local). Skip seq_idx computation entirely — in CG mode, mamba_layer.py manages seq_idx via shared CG buffers in _te_cuda_graph_replay. For non-CG (unpadded cu_seqlens), __post_init__ works as before. Signed-off-by: Seonjin Na <sna@nvidia.com>
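A rough sketch of the guard this commit describes, using illustrative names rather than the PR's exact code:

```python
import torch

def maybe_compute_seq_idx(cu_seqlens: torch.Tensor, total_tokens: int):
    if int(cu_seqlens[-1]) > total_tokens:
        # cu_seqlens was padded for CUDA graph replay: skip here and let
        # mamba_layer.py fill seq_idx from the shared CG buffers in
        # _te_cuda_graph_replay.
        return None
    # Non-CG path: map each token position to the packed document it belongs to.
    seq_idx = torch.zeros(total_tokens, dtype=torch.int32, device=cu_seqlens.device)
    for i in range(cu_seqlens.numel() - 1):
        seq_idx[cu_seqlens[i]:cu_seqlens[i + 1]] = i
    return seq_idx
```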
Force-pushed from 9b5382b to f12d221
What does this PR do?
Contribution process
Pre-checks
Code review
Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
.github/CODEOWNERS. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned. For PRs outside megatron/core, this step is skipped.
Step 3: Approved
Once all required reviewers have approved, the Approved label is applied automatically.
Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.