
Conversation

@shengliangxu

…coded model.layers

This is change set 1 from working on OMNIML-2917.

When we export a quantized model to the HF unified format, we hard-code module names with a "model.layers" prefix. This is unnecessary, and the real problem is that we can emit quant config entries with completely wrong prefixes, for example in exclude_modules. The Qwen3-VL models have two transformer blocks, language_model and vision. Before this change, for language_model we would output:

model.layers.language_model.layers.0.xxx
model.layers.language_model.layers.1.xxx

These prefixes are completely wrong, so when an inference system such as vLLM tries to read the quant config, it fails.

Fix it by simply using the prefixes obtained from parsing the model itself.
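
For illustration only, not code from this PR: a minimal sketch of the idea, assuming a plain PyTorch module tree and a hypothetical collect_exclude_modules helper. The point is that each entry's prefix comes from named_modules() itself rather than from a hard-coded "model.layers." string.

import torch.nn as nn

def collect_exclude_modules(model: nn.Module, should_exclude) -> list[str]:
    """Build exclude_modules entries from the model's real module names.

    Hypothetical helper (not this PR's implementation): each entry's prefix
    is taken straight from named_modules(), so a Qwen3-VL language_model
    block keeps its own prefix instead of gaining a bogus "model.layers." one.
    """
    names = []
    for name, module in model.named_modules():
        if name and should_exclude(module):
            names.append(name)
    return names

# Example: exclude everything that is not a Linear layer.
# exclude_modules = collect_exclude_modules(model, lambda m: not isinstance(m, nn.Linear))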

What does this PR do?

Type of change: Bug fix

Overview: Stop hard-coding a "model.layers" prefix during HF unified export; derive module prefixes from the model itself so that quant config entries such as exclude_modules carry the model's real module names.

Usage

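A hedged usage sketch rather than the PR's own snippet: it assumes ModelOpt's usual quantize-then-export flow (mtq.quantize with a default FP8 config, then export_hf_checkpoint); the checkpoint id, calibration loop, and export directory are placeholders.

import modelopt.torch.quantization as mtq
from modelopt.torch.export import export_hf_checkpoint
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("<hf-model-id>")  # placeholder checkpoint id

def calibrate(m):
    # Run a few representative batches through the model here (omitted in this sketch).
    pass

# Quantize with a default FP8 config, then export in the HF unified format.
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop=calibrate)

# After this change, the exported quant config lists modules under their real
# prefixes (e.g. language_model.* for Qwen3-VL) rather than a hard-coded
# "model.layers." prefix.
export_hf_checkpoint(model, export_dir="<output-dir>")  # placeholder path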

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

@shengliangxu shengliangxu self-assigned this Oct 28, 2025
@copy-pr-bot

copy-pr-bot bot commented Oct 28, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@shengliangxu shengliangxu force-pushed the shengliangx/export-with-actual-prefix branch from 2f6e3b6 to 5e6d5d0 Compare October 28, 2025 16:58
@shengliangxu shengliangxu force-pushed the shengliangx/export-with-actual-prefix branch from 5e6d5d0 to a254129 Compare October 28, 2025 17:01
@codecov

codecov bot commented Oct 28, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 73.38%. Comparing base (14fa1e5) to head (a254129).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #470   +/-   ##
=======================================
  Coverage   73.38%   73.38%           
=======================================
  Files         180      180           
  Lines       18111    18111           
=======================================
  Hits        13290    13290           
  Misses       4821     4821           

☔ View full report in Codecov by Sentry.

@shengliangxu
Author

abandon
