Suppress noisy _extra_state warnings during checkpoint loading#2689
Open
Conversation
Signed-off-by: Chen Cui <chcui@nvidia.com> Made-with: Cursor
Contributor
📝 Walkthrough: Modified the strict-loading fallback logic in `src/megatron/bridge/training/checkpointing.py` so that mismatched-key warnings are filtered of `._extra_state` entries.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 4 passed
Contributor
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/megatron/bridge/training/checkpointing.py`:
- Around lines 1369-1374: Add two explicit unit tests for the strict-load warning behavior in the checkpointing tests:
  1. A test that simulates a `load_return` whose only mismatched keys end in `._extra_state`, and asserts that `print_rank_0` is not called with the warning.
  2. A test that simulates a mixed missing/unexpected list (some keys ending in `._extra_state`, some not), and asserts that `print_rank_0` is called with the filtered non-extra key list (matching the `non_extra` logic).

  In each test, stub or mock the object that returns `load_return` with `missing_keys`/`unexpected_keys`, invoke the strict-load path that triggers the exception handling (using `load_return` and `e`, so the code path that computes `non_extra` and calls `print_rank_0` runs), and assert on the exact printed message contents referencing the filtered key list rather than making a generic call-only check. Reference `print_rank_0` and keys ending with `._extra_state` in the assertions so regressions are caught.
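The two tests the review asks for could be sketched roughly as below. Note that `strict_load_fallback` here is a hypothetical stand-in for the real code path in `checkpointing.py` that computes `non_extra` and calls `print_rank_0`; the actual function names, signatures, and message text may differ.

```python
from unittest import mock

def strict_load_fallback(missing_keys, unexpected_keys, print_rank_0):
    """Stand-in for the real fallback: warn only about non-_extra_state keys."""
    non_extra = [
        k for k in missing_keys + unexpected_keys
        if not k.endswith("._extra_state")
    ]
    if non_extra:
        print_rank_0(f"Warning: mismatched checkpoint keys: {non_extra}")

def test_extra_state_only_is_silent():
    # Only _extra_state mismatches: the warning must be suppressed.
    printer = mock.Mock()
    strict_load_fallback(["layer.0._extra_state"], [], printer)
    printer.assert_not_called()

def test_mixed_keys_warn_with_filtered_list():
    # Mixed mismatches: warn, but only with the non-_extra_state keys.
    printer = mock.Mock()
    strict_load_fallback(
        ["layer.0._extra_state", "layer.0.weight"], ["head.bias"], printer
    )
    printer.assert_called_once()
    msg = printer.call_args.args[0]
    assert "layer.0.weight" in msg and "head.bias" in msg
    assert "_extra_state" not in msg
```

In the real tests, `printer` would be a mock patched over `print_rank_0` in the checkpointing module rather than passed in explicitly.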
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f8339e5b-3e39-4333-8f25-ff4a28e00691
📒 Files selected for processing (1)
src/megatron/bridge/training/checkpointing.py
Contributor
Author
/ok to test b48bb8c
What does this PR do?
Suppress spurious warnings from
`_load_model_state_dict` when the only mismatched keys are TransformerEngine `._extra_state` entries.
Changelog
- Filter out `._extra_state` keys when reporting mismatched keys during the strict checkpoint loading fallback.
GitHub Actions CI
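The changelog entry above amounts to a small key filter before the warning is printed. A minimal sketch, assuming hypothetical names (`report_mismatched_keys` and `print_fn` are illustrative, not the actual identifiers in `checkpointing.py`):

```python
def report_mismatched_keys(missing_keys, unexpected_keys, print_fn=print):
    """Warn about strict-load mismatches, ignoring TE _extra_state entries."""
    # Keep only keys that are not TransformerEngine ._extra_state entries.
    non_extra = [
        k for k in missing_keys + unexpected_keys
        if not k.endswith("._extra_state")
    ]
    # Stay silent when every mismatch is an _extra_state entry.
    if non_extra:
        print_fn(f"Warning: mismatched keys during strict checkpoint load: {non_extra}")
```

With this shape, a checkpoint whose only deviations are `._extra_state` buffers loads without any warning, while genuine mismatches still surface with the noise removed.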
See the CI section in the Contributing doc for how to trigger the CI.
A Nvidia developer will need to approve and trigger the CI for external contributors.
Before your PR is "Ready for review"
Pre checks:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Additional Information
N/A
Made with Cursor