[bridge] Fix off-by-one in sliding window size for Gemma2, Gemma3, Mistral, and GPT-OSS #2656
Open
Conversation
Fix off-by-one in sliding window size for Gemma2, Gemma3, Mistral, and GPT-OSS

HuggingFace sliding_window is inclusive (tokens within the window are attended to), while Megatron/FlashAttention window_size is exclusive. Subtract 1 to align semantics. Also make GPT-OSS read sliding_window from the HF config instead of hardcoding 128.

Made-with: Cursor
Contributor
📝 Walkthrough

The changes adjust sliding window size calculations in three model bridge implementations. In Gemma2Bridge, Gemma3TEDotProductAttention, and the GPT-OSS bridge, `window_size` is modified to subtract 1 from the computed or configured window dimensions, affecting local attention behavior.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Contributor
Author
/ok to test 237efef
Contributor
Author
/ok to test 871dee9
What does this PR do?
Fix off-by-one error in sliding window attention size for Gemma2, Gemma3, Mistral, and GPT-OSS bridges.
FlashAttention's `window_size` tuple `(left, right)` uses inclusive bounds: `(W, 0)` attends to W + 1 tokens (W preceding + the current one). Since HuggingFace `sliding_window` defines the total window size (including the current token), we must subtract 1 when converting to the tuple form. This convention is also observed by Transformer Engine.
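To make the semantics concrete, here is a minimal standalone sketch of the conversion; the function name is ours for illustration, not the bridge's API:

```python
def hf_to_flash_window_size(sliding_window: int) -> tuple[int, int]:
    """Convert HF's inclusive sliding_window (total tokens attended,
    current token included) to FlashAttention's (left, right) tuple,
    whose left bound counts only the preceding tokens."""
    return (sliding_window - 1, 0)

# A 4096-token window means the current token plus 4095 preceding ones.
assert hf_to_flash_window_size(4096) == (4095, 0)
```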
Changelog

- Gemma2: subtract 1 from `sliding_window` (was passing raw HF value)
- Gemma3: subtract 1 when converting `window_size` to a tuple at the layer level (was passing raw int)
- Mistral: subtract 1 from `sliding_window` (was passing raw HF value)
- GPT-OSS: read `sliding_window` from the HF config instead of hardcoding 128, and subtract 1: `(128, 0)` becomes `(127, 0)` (see the sketch below)
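For the GPT-OSS item, the intent is roughly the following. This is a hedged sketch, not the bridge's actual code: the function name and the fallback to 128 for configs that omit the field are our assumptions.

```python
def gpt_oss_window_size(hf_config) -> tuple[int, int]:
    # Read sliding_window from the HF config instead of hardcoding 128;
    # the 128 here is only a hypothetical fallback for configs without the field.
    sliding_window = getattr(hf_config, "sliding_window", 128)
    # HF's count includes the current token; FlashAttention's left bound does not.
    return (sliding_window - 1, 0)  # e.g. 128 -> (127, 0)
```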
GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI.
An NVIDIA developer will need to approve and trigger the CI for external contributors.
Before your PR is "Ready for review"
Pre checks:
Additional Information
Thanks to @returnL for catching this in NVIDIA/Megatron-LM#2771