85 changes: 85 additions & 0 deletions docs/source/tutorials/Qwen3-Coder-30B-A3B.md
@@ -0,0 +1,85 @@
# Qwen3-Coder-30B-A3B

## Introduction

The newly released Qwen3-Coder-30B-A3B employs a sparse Mixture-of-Experts (MoE) architecture for efficient training and inference, and is optimized for agentic coding, extended context support of up to 1M tokens, and versatile function calling.

This document walks through the main verification steps for the model, including supported features, feature configuration, environment preparation, deployment, and accuracy and performance evaluation.

## Supported Features

Refer to [supported features](../user_guide/support_matrix/supported_models.md) for the model's feature support matrix.

Refer to the [feature guide](../user_guide/feature_guide/index.md) for how to configure each feature.

## Environment Preparation

### Model Weight

- `Qwen3-Coder-30B-A3B-Instruct` (BF16 version): requires one Atlas 800 A3 (64G × 16) node or one Atlas 800 A2 (64G/32G × 8) node. [Download model weight](https://modelers.cn/models/Modelers_Park/Qwen3-Coder-30B-A3B-Instruct)

It is recommended to download the model weight to a directory shared across the nodes, such as `/root/.cache/`.
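
For example, you can pre-fetch the weights with the ModelScope CLI (a sketch only; it assumes `pip install modelscope`, and the serve command below also resolves the model from ModelScope when `VLLM_USE_MODELSCOPE=true` is set):

```shell
# Sketch: pre-download the BF16 weights into the shared cache directory.
# Assumes the ModelScope CLI is available via `pip install modelscope`;
# adjust --local_dir to your shared path.
pip install modelscope
modelscope download --model Qwen/Qwen3-Coder-30B-A3B-Instruct \
  --local_dir /root/.cache/Qwen3-Coder-30B-A3B-Instruct
```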

### Installation

You can use our official Docker image to run `Qwen3-Coder-30B-A3B-Instruct` directly.

- Start the Docker container on your node; refer to [using docker](../installation.md#set-up-using-docker). A sketch is shown below.
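
A minimal sketch of starting the container follows; the image tag and device mounts are assumptions based on the usual Ascend setup, so adapt them to your driver layout and release:

```shell
# Sketch: run the official vllm-ascend image (the tag is an assumption;
# pick the release that matches your environment). The device and driver
# mounts follow the common Ascend pattern; adjust device IDs as needed.
export IMAGE=quay.io/ascend/vllm-ascend:v0.11.0rc0
docker run --rm -it \
  --name vllm-ascend \
  --device /dev/davinci0 \
  --device /dev/davinci_manager \
  --device /dev/devmm_svm \
  --device /dev/hisi_hdc \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
  -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
  -v /etc/ascend_install.info:/etc/ascend_install.info \
  -v /root/.cache:/root/.cache \
  -p 8000:8000 \
  $IMAGE bash
```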


Alternatively, if you don't want to use the Docker image above, you can build everything from source:

- Install `vllm-ascend` from source; refer to [installation](../installation.md). A sketch follows below.
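
A rough sketch of the source install (it assumes the Ascend toolchain and vLLM are already set up per the installation guide; branch and tag names may differ):

```shell
# Sketch: build vllm-ascend from source. Assumes CANN, torch-npu, and
# vLLM are installed as described in the installation guide.
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
```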


## Deployment

### Single-node Deployment

Run the following script to start the online inference server.

On an Atlas 800 A2 node, `--tensor-parallel-size` should be at least 2 for the 64 GB NPU card variant and at least 4 for the 32 GB variant (both variants are listed in the weight requirements above).

```shell
#!/bin/sh
export VLLM_USE_MODELSCOPE=true

vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct --served-model-name qwen3-coder --tensor-parallel-size 4 --enable-expert-parallel
```
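
After the server reports it is ready, you can sanity-check the endpoint by listing the served models (`/v1/models` is part of vLLM's OpenAI-compatible API):

```shell
# Expect "qwen3-coder" to appear in the returned model list.
curl http://localhost:8000/v1/models
```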

## Functional Verification

Once your server is started, you can query the model with input prompts:

```shell
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "qwen3-coder",
"messages": [
{"role": "user", "content": "Give me a short introduction to large language models."}
],
"temperature": 0.6,
"top_p": 0.95,
"top_k": 20,
"max_tokens": 4096
}'
```

## Accuracy Evaluation

Accuracy is evaluated with AISBench as follows.

### Using AISBench

1. Refer to [Using AISBench](../developer_guide/evaluation/using_ais_bench.md) for setup and usage details; an illustrative invocation is sketched after the results table below.

2. After execution you can collect the results. The following result for `Qwen3-Coder-30B-A3B-Instruct` on `vllm-ascend:0.11.0rc0` is provided for reference only.

| dataset | version | metric | mode | vllm-api-general-chat |
|----- | ----- | ----- | ----- | -----|
| openai_humaneval | f4a973 | humaneval_pass@1 | gen | 94.51 |
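
The table above corresponds to an AISBench run along the following lines (illustrative only: the model and dataset task names are assumptions inferred from the table header, and flags vary across AISBench versions, so follow the linked guide for the exact command):

```shell
# Illustrative only: task/model names are assumptions based on the result
# table above; consult the AISBench guide for the exact flags.
ais_bench --models vllm_api_general_chat --datasets humaneval_gen --debug
```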

## Performance

### Using AISBench

Refer to [Using AISBench for performance evaluation](../developer_guide/evaluation/using_ais_bench.md#execute-performance-evaluation) for details.
1 change: 1 addition & 0 deletions docs/source/tutorials/index.md
@@ -16,6 +16,7 @@ multi_npu_qwen3_moe
multi_npu_quantization
single_node_300i
DeepSeek-V3.2-Exp.md
Qwen3-Coder-30B-A3B.md
multi_node
multi_node_kimi
multi_node_qwen3vl
2 changes: 1 addition & 1 deletion docs/source/user_guide/support_matrix/supported_models.md
@@ -14,7 +14,7 @@ Get the latest info here: https://github.com/vllm-project/vllm-ascend/issues/160
| DeepSeek Distill (Qwen/Llama) | ✅ | |||||||||||||||||||
| Qwen3 | ✅ | |||||||||||||||||||
| Qwen3-based | ✅ | |||||||||||||||||||
| Qwen3-Coder | ✅ | |||||||||||||||||||
| Qwen3-Coder | ✅ | |✅|✅||✅|✅|✅|||✅|✅|✅|✅||||1M||[Qwen3-Coder-30B-A3B tutorial](../../tutorials/Qwen3-Coder-30B-A3B.md)|
| Qwen3-Moe | ✅ | |||||||||||||||||||
| Qwen3-Next | ✅ | |||||||||||||||||||
| Qwen2.5 | ✅ | |||||||||||||||||||