
[bridge] Fix off-by-one in sliding window size for Gemma2, Gemma3, Mistral, and GPT-OSS#2656

Open
cuichenx wants to merge 5 commits into main from chcui/fix-sliding-window-off-by-one

Conversation


@cuichenx cuichenx commented Mar 5, 2026

What does this PR do?

Fix off-by-one error in sliding window attention size for Gemma2, Gemma3, Mistral, and GPT-OSS bridges.

FlashAttention's window_size tuple (left, right) uses inclusive bounds — (W, 0) attends to W + 1 tokens (W preceding + current). Since HuggingFace sliding_window defines the total window size (including the current token), we must subtract 1 when converting to the tuple form. This convention is also observed by Transformer Engine.
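The conversion described above can be sketched as a small standalone helper (hypothetical code for illustration, not the actual bridge implementation, which lives in the individual model bridge files):

```python
from typing import Optional, Tuple


def hf_window_to_flash_tuple(sliding_window: Optional[int]) -> Optional[Tuple[int, int]]:
    """Convert HuggingFace's `sliding_window` (total tokens attended,
    including the current one) to FlashAttention's inclusive
    `(left, right)` tuple for causal attention.

    FlashAttention's `(W, 0)` attends to W preceding tokens plus the
    current token, i.e. W + 1 tokens total, hence the subtraction.
    """
    if sliding_window is None:
        return None  # no sliding window configured: full attention
    return (sliding_window - 1, 0)
```

For example, GPT-OSS's `sliding_window = 128` maps to `(127, 0)`, which matches the corrected predefined provider default below.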

Changelog

  • Fix Gemma2 bridge to subtract 1 from sliding_window (was passing raw HF value)
  • Fix Gemma3 provider to subtract 1 when converting int window_size to tuple at the layer level (was passing raw int)
  • Fix Mistral bridge to subtract 1 from sliding_window (was passing raw HF value)
  • Fix GPT-OSS bridge to read sliding_window from HF config instead of hardcoding 128, and subtract 1
  • Fix GPT-OSS predefined provider default from (128, 0) to (127, 0)
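As a sanity check on the counting behind these fixes (a standalone sketch, not code from this repository), the inclusive bounds can be enumerated directly: with HF `sliding_window = W`, the corrected tuple `(W - 1, 0)` attends to exactly `min(i + 1, W)` tokens at causal position `i`, whereas the unconverted `(W, 0)` over-attends by one token once the window fills:

```python
def attended_positions(i: int, left: int, right: int) -> list:
    """Key positions j attended by query position i under
    FlashAttention-style inclusive bounds (left, right), causal setting."""
    return [j for j in range(i - left, i + right + 1) if 0 <= j <= i]


# With HF sliding_window = 4, at most 4 tokens should be attended.
W = 4
for i in range(8):
    # Corrected conversion: window caps at exactly W tokens.
    assert len(attended_positions(i, left=W - 1, right=0)) == min(i + 1, W)
    # Raw HF value passed through: window caps at W + 1 tokens (the bug).
    assert len(attended_positions(i, left=W, right=0)) == min(i + 1, W + 1)
```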

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI.
An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

Additional Information

Thanks to @returnL for catching this in NVIDIA/Megatron-LM#2771

…d GPT-OSS

HuggingFace sliding_window counts the current token in the total window size, while
Megatron/FlashAttention window_size counts only the preceding tokens. Subtract 1 to align
semantics. Also make GPT-OSS read sliding_window from the HF config instead of
hardcoding 128.

Made-with: Cursor

coderabbitai bot commented Mar 5, 2026

📝 Walkthrough

The changes adjust sliding window size calculations in three model bridge implementations. In Gemma2Bridge, Gemma3TEDotProductAttention, and GPT-OSS bridge, window_size is modified to subtract 1 from the computed or configured window dimensions, affecting local attention behavior.

Changes

Gemma Model Bridges (src/megatron/bridge/models/gemma/gemma2_bridge.py, src/megatron/bridge/models/gemma/gemma3_provider.py):
Adjusted sliding window size calculations to subtract 1 from configured window dimensions in attention layer configuration.

GPT-OSS Bridge (src/megatron/bridge/models/gpt_oss/gpt_oss_bridge.py):
Changed window_size configuration from a fixed value to dynamically computed (sliding_window - 1).

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested labels

r0.3.0

🚥 Pre-merge checks | ✅ 2 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 66.67%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Test Results For Major Changes ⚠️ Warning: The PR contains numerical changes affecting sliding window attention computation but lacks actual test results demonstrating no regression. Resolution: add actual test results or GitHub Actions CI/CD logs showing successful test execution and logit validation.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately describes the main changes: fixing an off-by-one error in sliding window size calculations across multiple model bridges (Gemma2, Gemma3, and GPT-OSS).



cuichenx commented Mar 5, 2026

/ok to test 237efef

Signed-off-by: Chen Cui <chcui@nvidia.com>
cuichenx changed the title from "[bridge] Fix off-by-one in sliding window size for Gemma2, Gemma3, and GPT-OSS" to "[bridge] Fix off-by-one in sliding window size for Gemma2, Gemma3, Mistral, and GPT-OSS" on Mar 5, 2026

cuichenx commented Mar 6, 2026

/ok to test 871dee9

