diff --git a/docs/source/tutorials/Qwen3-Coder-30B-A3B.md b/docs/source/tutorials/Qwen3-Coder-30B-A3B.md
new file mode 100644
index 00000000000..b5df78d8a4d
--- /dev/null
+++ b/docs/source/tutorials/Qwen3-Coder-30B-A3B.md
@@ -0,0 +1,83 @@
+# Qwen3-Coder-30B-A3B
+
+## Introduction
+
+The newly released Qwen3-Coder-30B-A3B employs a sparse MoE architecture for efficient training and inference, delivering significant improvements in agentic coding, extended context support of up to 1M tokens, and versatile function calling.
+
+This document walks through the main verification steps for the model: supported features, feature configuration, environment preparation, single-node deployment, and accuracy and performance evaluation.
+
+## Supported Features
+
+Refer to [supported features](../user_guide/support_matrix/supported_models.md) for the model's supported feature matrix.
+
+Refer to the [feature guide](../user_guide/feature_guide/index.md) for each feature's configuration.
+
+## Environment Preparation
+
+### Model Weight
+
+- `Qwen3-Coder-30B-A3B-Instruct` (BF16 version): requires one Atlas 800 A3 node (64 GB × 16) or one Atlas 800 A2 node (64 GB/32 GB × 8). [Download model weight](https://modelers.cn/models/Modelers_Park/Qwen3-Coder-30B-A3B-Instruct)
+
+It is recommended to download the model weight to a directory shared by all nodes, such as `/root/.cache/`. A sketch of a download command is given in the appendix at the end of this tutorial.
+
+### Installation
+
+You can use our official Docker image to run `Qwen3-Coder-30B-A3B-Instruct` directly:
+
+- Start the Docker image on your node; refer to [using docker](../installation.md#set-up-using-docker). An example `docker run` sketch is also given in the appendix.
+
+Alternatively, if you don't want to use the Docker image, you can build everything from source:
+
+- Install `vllm-ascend` from source; refer to [installation](../installation.md).
+
+## Deployment
+
+### Single-node Deployment
+
+Run the following script to start the online inference service.
+
+On an Atlas A2 with 64 GB of memory per NPU card, `--tensor-parallel-size` should be at least 2; with 32 GB per card, it should be at least 4.
+
+```shell
+#!/bin/sh
+export VLLM_USE_MODELSCOPE=true
+
+vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --served-model-name qwen3-coder --tensor-parallel-size 4 --enable-expert-parallel
+```
+
+## Functional Verification
+
+Once your server is started, you can query the model with input prompts (the appendix adds a health check and a coding-oriented request):
+
+```shell
+curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
+    "model": "qwen3-coder",
+    "messages": [
+        {"role": "user", "content": "Give me a short introduction to large language models."}
+    ],
+    "temperature": 0.6,
+    "top_p": 0.95,
+    "top_k": 20,
+    "max_tokens": 4096
+}'
+```
+
+## Accuracy Evaluation
+
+### Using AISBench
+
+1. Refer to [Using AISBench](../developer_guide/evaluation/using_ais_bench.md) for details.
+
+2. After execution you can collect the results. The result of `Qwen3-Coder-30B-A3B-Instruct` on `vllm-ascend:0.11.0rc0` is listed below, for reference only.
+
+| dataset | version | metric | mode | vllm-api-general-chat |
+| ----- | ----- | ----- | ----- | ----- |
+| openai_humaneval | f4a973 | humaneval_pass@1 | gen | 94.51 |
+
+## Performance Evaluation
+
+### Using AISBench
+
+Refer to [Using AISBench for performance evaluation](../developer_guide/evaluation/using_ais_bench.md#execute-performance-evaluation) for details.
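+
+## Appendix: Example Command Sketches
+
+The snippets below are illustrative sketches for the steps above, not verified commands; paths, image tags, and device IDs are assumptions that you should adapt to your environment.
+
+### Downloading the Model Weight
+
+A minimal sketch for fetching the weights via the ModelScope hub (which the serving script above also relies on through `VLLM_USE_MODELSCOPE`); the target directory is an example path:
+
+```shell
+# Install the ModelScope CLI if it is not already available
+pip install modelscope
+
+# Download the BF16 weights into the shared cache directory (example path)
+modelscope download --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
+    --local_dir /root/.cache/Qwen3-Coder-30B-A3B-Instruct
+```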
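+
+### Starting the Docker Container
+
+A sketch of a typical `docker run` invocation for a vllm-ascend image; the image tag, device list, and mounts are assumptions, so follow [using docker](../installation.md#set-up-using-docker) for the authoritative command:
+
+```shell
+# Example only: expose as many davinci devices as your tensor parallel size needs,
+# and adjust the image tag to the release you are using
+export IMAGE=quay.io/ascend/vllm-ascend:v0.11.0rc0
+docker run --rm -it --name vllm-ascend \
+    --device /dev/davinci0 \
+    --device /dev/davinci1 \
+    --device /dev/davinci2 \
+    --device /dev/davinci3 \
+    --device /dev/davinci_manager \
+    --device /dev/devmm_svm \
+    --device /dev/hisi_hdc \
+    -v /usr/local/dcmi:/usr/local/dcmi \
+    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+    -v /etc/ascend_install.info:/etc/ascend_install.info \
+    -v /root/.cache:/root/.cache \
+    -p 8000:8000 \
+    $IMAGE bash
+```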
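+
+### Checking Server Health
+
+Before sending the functional-verification request, you can confirm the server is up via the standard OpenAI-compatible model listing endpoint (assuming the default port 8000):
+
+```shell
+# Expect "qwen3-coder" to appear in the returned model list
+curl http://localhost:8000/v1/models
+```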
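+
+### Example Coding Request
+
+Because this is a code model, a coding prompt makes a more representative smoke test; the sampling parameters below simply mirror the functional-verification example:
+
+```shell
+curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
+    "model": "qwen3-coder",
+    "messages": [
+        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
+    ],
+    "temperature": 0.6,
+    "top_p": 0.95,
+    "max_tokens": 1024
+}'
+```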
\ No newline at end of file
diff --git a/docs/source/tutorials/index.md b/docs/source/tutorials/index.md
index e5be4aa7da4..bceacc419d7 100644
--- a/docs/source/tutorials/index.md
+++ b/docs/source/tutorials/index.md
@@ -16,6 +16,7 @@ multi_npu_qwen3_moe
 multi_npu_quantization
 single_node_300i
 DeepSeek-V3.2-Exp.md
+Qwen3-Coder-30B-A3B.md
 multi_node
 multi_node_kimi
 multi_node_qwen3vl
diff --git a/docs/source/user_guide/support_matrix/supported_models.md b/docs/source/user_guide/support_matrix/supported_models.md
index a9260120cc4..60987e330b3 100644
--- a/docs/source/user_guide/support_matrix/supported_models.md
+++ b/docs/source/user_guide/support_matrix/supported_models.md
@@ -14,7 +14,7 @@ Get the latest info here: https://github.com/vllm-project/vllm-ascend/issues/160
 | DeepSeek Distill (Qwen/Llama) | ✅ | |||||||||||||||||||
 | Qwen3 | ✅ | |||||||||||||||||||
 | Qwen3-based | ✅ | |||||||||||||||||||
-| Qwen3-Coder | ✅ | |||||||||||||||||||
+| Qwen3-Coder | ✅ | |✅|✅||✅|✅|✅|||✅|✅|✅|✅||||||[Qwen3-Coder-30B-A3B tutorial](../../tutorials/Qwen3-Coder-30B-A3B.md)|
 | Qwen3-Moe | ✅ | |||||||||||||||||||
 | Qwen3-Next | ✅ | |||||||||||||||||||
 | Qwen2.5 | ✅ | |||||||||||||||||||