
Conversation

Levi-JQ (Contributor) commented on Nov 20, 2025

What this PR does / why we need it?

Fix the weight-loading error that occurs when starting the service with the quantized qwen3-vl w8a8 model.

Does this PR introduce any user-facing change?

How was this patch tested?

github-actions bot commented

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description, so reviewers and future developers can understand the change.

If CI fails, you can run the linting and testing checks locally; see Contributing and Testing.

gemini-code-assist bot left a comment

Code Review

This pull request introduces a bugfix for weight loading in quantized qwen3_vl_moe models. It correctly adds prefix mapping logic to find quantization parameters and defines packed modules for this model type. My review focuses on improving the maintainability of the prefix mapping logic. I've suggested a change to avoid duplicating configuration data, which will make the code clearer and less prone to future errors.
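
For context on the second part of the change: "packed modules" tell the quantization layer which separate checkpoint weights are fused into a single vLLM module, so their quantization parameters can be looked up together. A minimal sketch of what such a mapping typically looks like in vLLM-style models is below; the exact entries this PR registers for qwen3_vl_moe may differ.

    # Illustrative only; the concrete packed-modules entries added by this PR
    # for qwen3_vl_moe may differ from this common layout.
    packed_modules_mapping = {
        # separate q/k/v checkpoint weights are loaded into one fused projection
        "qkv_proj": ["q_proj", "k_proj", "v_proj"],
        # the MLP gate and up projections are likewise fused
        "gate_up_proj": ["gate_proj", "up_proj"],
    }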

Comment on lines +109 to +116
    if model_type == "qwen3_vl_moe":
        hf_to_vllm_mapper = WeightsMapper(
            orig_to_new_prefix={
                "visual.": "model.visual.",
                "language_model.lm_head.": "lm_head.",
                "language_model.model.": "model.language_model.",
            })
        prefix = hf_to_vllm_mapper._map_name(prefix)
Severity: high

The current implementation for mapping vLLM prefixes to HuggingFace prefixes for qwen3_vl_moe duplicates configuration by defining an inverted mapping dictionary. This is confusing and error-prone if the original mapping changes.

To improve maintainability and clarity, I suggest using the same dictionary structure as in the model definition (HF-to-vLLM) and then using the _reverse_map_name method to perform the required vLLM-to-HF conversion. This makes the code's intent clearer and avoids maintaining two separate, inverted dictionaries.

Suggested change

    -if model_type == "qwen3_vl_moe":
    -    hf_to_vllm_mapper = WeightsMapper(
    -        orig_to_new_prefix={
    -            "visual.": "model.visual.",
    -            "language_model.lm_head.": "lm_head.",
    -            "language_model.model.": "model.language_model.",
    -        })
    -    prefix = hf_to_vllm_mapper._map_name(prefix)
    +if model_type == "qwen3_vl_moe":
    +    hf_to_vllm_mapper = WeightsMapper(
    +        orig_to_new_prefix={
    +            "model.visual.": "visual.",
    +            "lm_head.": "language_model.lm_head.",
    +            "model.language_model.": "language_model.model.",
    +        })
    +    prefix = hf_to_vllm_mapper._reverse_map_name(prefix)
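
For concreteness, here is a minimal sketch of what either variant is doing (not part of the PR; the import path and the sample module path are assumptions for illustration, and it relies on _map_name rewriting a leading prefix according to orig_to_new_prefix, as the PR code does): the vLLM module prefix handed to the quantization config is translated into the HF-style key under which the checkpoint stores its quantization parameters.

    # Sketch only, not part of the PR. The layer index and module path are made
    # up for illustration; the import path is an assumption.
    from vllm.model_executor.models.utils import WeightsMapper

    mapper = WeightsMapper(
        orig_to_new_prefix={
            "visual.": "model.visual.",
            "language_model.lm_head.": "lm_head.",
            "language_model.model.": "model.language_model.",
        })

    # vLLM module prefix -> HF checkpoint prefix, so the per-layer quantization
    # parameters are looked up under the name actually present in the checkpoint.
    print(mapper._map_name("language_model.model.layers.0.self_attn.qkv_proj"))
    # expected: model.language_model.layers.0.self_attn.qkv_proj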

Levi-JQ changed the title from "[Bugfix] qwen3-v-w8a8 load weight ERROR when start service" to "[Bugfix] qwen3-vl-235b-w8a8 load weight ERROR when start service" on Nov 20, 2025