[Bugfix] qwen3-vl-235b-w8a8 load weight ERROR when start service #4292
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request introduces a bugfix for weight loading in quantized qwen3_vl_moe models. It correctly adds prefix mapping logic to find quantization parameters and defines packed modules for this model type. My review focuses on improving the maintainability of the prefix mapping logic. I've suggested a change to avoid duplicating configuration data, which will make the code clearer and less prone to future errors.
if model_type == "qwen3_vl_moe":
    hf_to_vllm_mapper = WeightsMapper(
        orig_to_new_prefix={
            "visual.": "model.visual.",
            "language_model.lm_head.": "lm_head.",
            "language_model.model.": "model.language_model.",
        })
    prefix = hf_to_vllm_mapper._map_name(prefix)
The current implementation for mapping vLLM prefixes to HuggingFace prefixes for qwen3_vl_moe duplicates configuration by defining an inverted mapping dictionary. This is confusing and error-prone if the original mapping changes.
To improve maintainability and clarity, I suggest using the same dictionary structure as in the model definition (HF-to-vLLM) and then using the _reverse_map_name method to perform the required vLLM-to-HF conversion. This makes the code's intent clearer and avoids maintaining two separate, inverted dictionaries.
Suggested change:

Current:

if model_type == "qwen3_vl_moe":
    hf_to_vllm_mapper = WeightsMapper(
        orig_to_new_prefix={
            "visual.": "model.visual.",
            "language_model.lm_head.": "lm_head.",
            "language_model.model.": "model.language_model.",
        })
    prefix = hf_to_vllm_mapper._map_name(prefix)

Suggested:

if model_type == "qwen3_vl_moe":
    hf_to_vllm_mapper = WeightsMapper(
        orig_to_new_prefix={
            "model.visual.": "visual.",
            "lm_head.": "language_model.lm_head.",
            "model.language_model.": "language_model.model.",
        })
    prefix = hf_to_vllm_mapper._reverse_map_name(prefix)
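For illustration, here is a small self-contained sketch of the vLLM-to-HF prefix translation that both variants above perform. It uses plain Python rather than the vLLM WeightsMapper API, and the example layer name is hypothetical:

# Standalone illustration of the prefix translation discussed above; not the
# vLLM WeightsMapper API. The example layer name below is hypothetical.
# HF-to-vLLM prefix mapping, in the same direction as the model definition.
HF_TO_VLLM = {
    "model.visual.": "visual.",
    "lm_head.": "language_model.lm_head.",
    "model.language_model.": "language_model.model.",
}

def vllm_to_hf(prefix: str) -> str:
    # Reverse-apply the HF->vLLM prefix mapping on a single module name.
    for hf_prefix, vllm_prefix in HF_TO_VLLM.items():
        if prefix.startswith(vllm_prefix):
            return hf_prefix + prefix[len(vllm_prefix):]
    return prefix

# A vLLM-side prefix such as "language_model.model.layers.0.mlp.experts"
# translates back to "model.language_model.layers.0.mlp.experts", the
# HF-side name under which the quantization parameters are looked up.
print(vllm_to_hf("language_model.model.layers.0.mlp.experts"))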
Signed-off-by: Levi-JQ <[email protected]>
Force-pushed from b295737 to 464a4b6.
What this PR does / why we need it?
Fixes the qwen3-vl-w8a8 weight loading error that occurs when starting the service.
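For context, a minimal sketch of the load path this fix exercises. The model path, quantization backend name, and parallelism below are assumptions for illustration, not taken from this PR:

# Illustrative only: the model path, quantization backend name, and
# parallelism are assumptions, not taken from this PR.
from vllm import LLM

llm = LLM(
    model="/path/to/qwen3-vl-235b-w8a8",  # hypothetical local W8A8 checkpoint
    quantization="ascend",                # assumed vllm-ascend quantization backend name
    tensor_parallel_size=8,               # illustrative parallelism for a 235B model
)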
Does this PR introduce any user-facing change?
How was this patch tested?