[Doc] Add tutorial for Qwen3-Coder-30B-A3B #4275

# Qwen3-Coder-30B-A3B

## Introduction

The newly released Qwen3-Coder-30B-A3B employs a sparse MoE architecture for efficient training and inference, delivering significant optimizations in agentic coding, extended context support of up to 1M tokens, and versatile function calling.

This document walks through the main verification steps for the model, including supported features, feature configuration, environment preparation, deployment, and accuracy and performance evaluation.

## Supported Features

Refer to [supported features](../user_guide/support_matrix/supported_models.md) for the model's supported feature matrix.

Refer to the [feature guide](../user_guide/feature_guide/index.md) for each feature's configuration.

## Environment Preparation

### Model Weight

- `Qwen3-Coder-30B-A3B-Instruct` (BF16 version): requires 1 Atlas 800 A3 node (64 GB × 16) or 1 Atlas 800 A2 node (64 GB/32 GB × 8). [Download model weight](https://modelers.cn/models/Modelers_Park/Qwen3-Coder-30B-A3B-Instruct)

It is recommended to download the model weight to a directory shared by all nodes, such as `/root/.cache/`.

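For reference, here is a minimal download sketch using the ModelScope CLI; the `modelscope` package and the local target path are assumptions, so adjust them to match your environment and the weight source you actually use:

```shell
# Assumption: the ModelScope CLI is available (pip install modelscope).
# Download the BF16 weights into a directory shared by all nodes.
modelscope download --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
    --local_dir /root/.cache/Qwen3-Coder-30B-A3B-Instruct
```
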
### Installation

You can use our official Docker image to run `Qwen3-Coder-30B-A3B-Instruct` directly.

- Start the docker image on your node, referring to [using docker](../installation.md#set-up-using-docker); a hedged example is sketched below.

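For orientation, a minimal `docker run` sketch is shown below. The image tag and device mounts are assumptions based on a typical Ascend setup; the [using docker](../installation.md#set-up-using-docker) guide has the authoritative image name and options:

```shell
# Image tag is an assumption; take the real one from the installation guide.
export IMAGE=quay.io/ascend/vllm-ascend:v0.11.0rc0

# Mount one /dev/davinciX device per NPU you plan to use
# (tensor-parallel-size 4 needs four of them).
docker run --rm -it \
    --name vllm-ascend \
    --device /dev/davinci0 \
    --device /dev/davinci1 \
    --device /dev/davinci2 \
    --device /dev/davinci3 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    -v /root/.cache:/root/.cache \
    -p 8000:8000 \
    $IMAGE bash
```
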
Alternatively, if you don't want to use the docker image above, you can build everything from source:

- Install `vllm-ascend` from source, referring to [installation](../installation.md).

## Deployment

### Single-node Deployment

Run the following script to start the online inference server.

For an Atlas A2 with 64 GB of NPU card memory, `--tensor-parallel-size` should be at least 2; with 32 GB of memory, it should be at least 4.

> **Contributor comment:** There's an inconsistency in the hardware description for the Atlas A2. Line 19 states that the Atlas 800 A2 node is equipped with 64G cards. However, this line introduces a recommendation for a configuration with '32 GB of memory'. This discrepancy can cause confusion for users. Please clarify if Atlas A2 variants with 32GB cards exist and are supported, or remove the reference to the 32GB configuration to maintain consistency.

```shell
#!/bin/sh
# Pull the model weights from ModelScope instead of the Hugging Face Hub.
export VLLM_USE_MODELSCOPE=true

# Serve the model across 4 NPUs with tensor parallelism, and enable
# expert parallelism for the sparse MoE layers.
vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --served-model-name qwen3-coder --tensor-parallel-size 4 --enable_expert_parallel
```

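Startup can take a while because the weights must be loaded onto the NPUs. vLLM's OpenAI-compatible server exposes a `/health` endpoint, so a small polling loop (assuming the default port 8000) can wait for readiness:

```shell
# Poll the health endpoint until the server answers.
until curl -sf http://localhost:8000/health > /dev/null; do
    echo "Waiting for the vLLM server to come up..."
    sleep 5
done
echo "Server is ready."
```
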
## Functional Verification

Once your server is started, you can query the model with input prompts:

```shell
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "qwen3-coder",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.6,
  "top_p": 0.95,
  "top_k": 20,
  "max_tokens": 4096
}'
```

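The response is a standard OpenAI-style JSON object. If `jq` is installed, the generated text can be extracted directly; the prompt below is just an illustrative example:

```shell
# Send a request and print only the model's reply.
curl -s http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "qwen3-coder",
          "messages": [{"role": "user", "content": "Write a Python one-liner that reverses a string."}],
          "max_tokens": 256
        }' | jq -r '.choices[0].message.content'
```
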
## Accuracy Evaluation

Accuracy can be evaluated with AISBench as follows.

### Using AISBench

1. Refer to [Using AISBench](../developer_guide/evaluation/using_ais_bench.md) for details; a hedged example invocation is sketched after the results table below.

2. After execution you can get the results. Here is the result of `Qwen3-Coder-30B-A3B-Instruct` on `vllm-ascend:0.11.0rc0`, for reference only.

| dataset | version | metric | mode | vllm-api-general-chat |
| ----- | ----- | ----- | ----- | ----- |
| openai_humaneval | f4a973 | humaneval_pass@1 | gen | 94.51 |

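For orientation only, a hypothetical AISBench invocation matching the table above is sketched below; the CLI name, backend, and dataset identifiers are assumptions, so follow the [Using AISBench](../developer_guide/evaluation/using_ais_bench.md) guide for the exact commands:

```shell
# Hypothetical invocation; flag and dataset names are assumptions.
# vllm_api_general_chat should point at the server started above
# (see the AISBench guide for how to configure the endpoint).
ais_bench --models vllm_api_general_chat --datasets humaneval_gen --debug
```
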
## Performance

### Using AISBench

Refer to [Using AISBench for performance evaluation](../developer_guide/evaluation/using_ais_bench.md#execute-performance-evaluation) for details.

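Besides AISBench, a quick throughput and latency check can be done with vLLM's built-in serving benchmark (`vllm bench serve`) against the server started above; the token lengths and prompt count below are illustrative assumptions:

```shell
# Benchmark the running server with synthetic random prompts.
# --served-model-name must match the name the server was launched with;
# lengths and prompt count are illustrative assumptions.
vllm bench serve \
    --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
    --served-model-name qwen3-coder \
    --base-url http://localhost:8000 \
    --dataset-name random \
    --random-input-len 1024 \
    --random-output-len 512 \
    --num-prompts 200
```
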
> **Contributor comment:** add this file name to index.md