@@ -135,13 +172,17 @@ cd build/bin
Hi! I'm DeepSeek-R1, an artificial intelligence assistant created by DeepSeek. I'm at your service and would be delighted to assist you with any inquiries or tasks you may have.
```
-### GGUF 基准测试
+### 模型测试
+
+
```bash
./llama-bench -m DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf
```
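+
+可以通过 `-t` 参数指定测试线程数,下方结果即为 8 线程(`-t 8`)下的输出。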
-```bash
+
+
+```txt
radxa@orion-o6:~/llama.cpp/build/bin$ ./llama-bench -m ~/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf -t 8
| model | size | params | backend | threads | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
diff --git a/docs/common/ai/_ollama.mdx b/docs/common/ai/_ollama.mdx
index 24e105122..e74db61e8 100644
--- a/docs/common/ai/_ollama.mdx
+++ b/docs/common/ai/_ollama.mdx
@@ -1,5 +1,5 @@
-Ollama 是一个用于在本地运行和管理大语言模型(LLM)的工具。
-它可以让你在本地设备上轻松拉取、运行和管理各种 AI 模型,比如 LLaMA、Mistral、Gemma 等,无需复杂的环境配置。
+Ollama 是一款高效的本地大语言模型(LLM)管理与运行工具。
+它极大地简化了 AI 模型的部署流程,让用户无需复杂的环境配置,即可在本地设备上实现模型的一键拉取、运行与统一管理。
## Ollama 安装
@@ -13,7 +13,7 @@ curl -fsSL https://ollama.com/install.sh | sh
### 拉取模型
-此命令会通过互联网下载模型文件
+此命令会通过互联网下载模型文件。
```bash
ollama pull deepseek-r1:1.5b
@@ -21,7 +21,7 @@ ollama pull deepseek-r1:1.5b
### 运行模型
-此命令会直接运行模型,如本地没有模型缓存会自动通过互联网下载模型文件并运行
+此命令会直接启动模型,若无本地缓存,则自动联网下载并运行。
```bash
ollama run deepseek-r1:1.5b
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_API-manual.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_API-manual.mdx
new file mode 100644
index 000000000..f45c04a79
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_API-manual.mdx
@@ -0,0 +1,5 @@
+NPU 驱动对上层开放的 API 接口分为 C++ 和 Python 两部分。
+
+详细的 API 手册可在[此芯开发者中心](https://developer.cixtech.com/)下载。
+
+向下滚动找到“文档资源”栏目,点击 AI 一栏的下载即可。
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_BEV_RoadSeg.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_BEV_RoadSeg.mdx
new file mode 100644
index 000000000..cf7dfde05
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_BEV_RoadSeg.mdx
@@ -0,0 +1,4 @@
+BEV_RoadSeg 是一个专注于自动驾驶可行驶空间感知的专用系统。它创新性地融合鸟瞰图变换与 Transformer 架构,通过 LSTR 深度学习模型对道路结构进行精准分割,从而在复杂的动态行车环境中实现稳定、可靠的车道与可行驶区域识别。
+
+- 核心功能:基于多摄像头环视输入,生成鸟瞰视角下的高精度可行驶区域与车道线分割图,为路径规划提供关键感知依据。
+- 技术特点:采用 LSTR 模型作为核心,利用 Transformer 对长距离空间关系的强大建模能力,有效应对弯道、岔路口及部分遮挡等挑战性场景。
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_ai-hub.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_ai-hub.mdx
index 8f5ecbc4a..78e7278a2 100644
--- a/docs/common/orion-common/app-dev/artificial-intelligence/_ai-hub.mdx
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_ai-hub.mdx
@@ -1,16 +1,47 @@
-CIX AI Model Hub 是针对在 CIX SOC 上部署进行了优化的机器学习模型的集合。里面包含了多种领域的 AI 模型例子(例如计算机视觉,语音识别,生成式 AI 等多种开源模型)与针对 CIX SOC NPU 编译的配置文件。这里主要介绍 AI Model Hub 下载及运行其中的模型。
+CIX AI Model Hub 仓库托管在魔搭社区上,访问地址:[cix ai_model_hub](https://modelscope.cn/models/cix/ai_model_hub_25_Q3)。
-## 下载 CIX AI Model Hub 仓库
+## 克隆目录
-CIX AI Model Hub 仓库托管在魔搭社区上,访问地址 [cix ai_model_hub](https://modelscope.cn/models/cix/ai_model_hub_25_Q3)
+通过下面的命令可以仅克隆非 git lfs 跟踪的文件(请确保已安装 git lfs):
+
+:::info
+在板端克隆目录结构之后,再根据是否需要进行主机端模型转换,决定是否在主机上同样克隆一份。
+:::
+
+```bash
+GIT_LFS_SKIP_SMUDGE=1 git clone https://www.modelscope.cn/cix/ai_model_hub_25_Q3.git
+```
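+
+之后如果只需要其中某个大文件,可以按需单独拉取(下面的文件路径仅为示例,请替换为实际需要的文件):
+
+```bash
+cd ai_model_hub_25_Q3
+git lfs pull --include="models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3/deeplab_v3.cix" --exclude=""
+```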
-可使用 git 下载(请确保安装 git-lfs)
+## 配置板端环境
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
```bash
+cd ai_model_hub_25_Q3
+python3 -m venv --system-site-packages .venv
+source .venv/bin/activate
+pip3 install -r requirements.txt
+```
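+
+这里使用 `--system-site-packages` 创建虚拟环境,通常是为了让虚拟环境能够复用系统中预装的 NPU 相关 Python 包(具体以实际系统环境为准)。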
+
+
+
+## 下载整个仓库
+
+:::tip[建议]
+由于仓库体积较大,建议避免全量克隆。
+:::
+
+使用 git 下载(请确保已安装 git-lfs):
+
+```bash
+mkdir ai-model-hub && cd ai-model-hub
+git lfs install
git clone https://www.modelscope.cn/cix/ai_model_hub_25_Q3.git
```
-模型例子目录
+目录如下:
```bash
.
@@ -68,49 +99,3 @@ git clone https://www.modelscope.cn/cix/ai_model_hub_25_Q3.git
├── text_process.py
└── tools.py
```
-
-## 模型运行
-
-### 配置环境
-
-进入模型目录下
-
-```bash
-cd ai_model_hub_25_Q3
-```
-
-创建 Python 虚拟环境
-
-```bash
-python3 -m venv venv
-```
-
-激活虚拟环境
-
-```bash
-source venv/bin/activate
-```
-
-安装 Python 环境
-
-```bash
-pip3 install -r requirements.txt
-```
-
-### 模型例子
-
-1. 将人类可读的输入预处理为模型输入
-2. 运行模型推理
-3. 将模型输出后处理为人类可读的格式
-
-所有模型示例代码均可在 O6/O6N 上使用 NPU 端到端运行:
-
-```bash
-python3 inference_npu.py
-```
-
-此外,还可以通过 OnnxRuntime 在 X86 主机上或 O6/O6N上使用 CPU 本地运行端到端示例
-
-```bash
-python3 inference_onnx.py
-```
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_clip.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_clip.mdx
new file mode 100644
index 000000000..747c4f6be
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_clip.mdx
@@ -0,0 +1,228 @@
+**CLIP** 是由 OpenAI 开发的通用多模态预训练模型。它通过在互联网上收集的数亿对“图像-文本”数据上进行对比学习,打破了传统视觉模型依赖人工手动标注类别的局限,赋予了人工智能通过自然语言直接“理解”视觉世界的能力。
+
+- 核心特点:具备强大的跨模态对齐能力与零样本(Zero-shot)迁移能力,无需针对特定任务微调即可识别从未见过的物体类别。它广泛应用于语义图文检索、自动提示词生成,并作为 Stable Diffusion 等生成式 AI 的核心文本编码器。
+- 版本说明:本案例采用 CLIP-ViT-B/32 模型。作为该系列中兼顾性能与部署效率的基准版本,它采用 Vision Transformer (ViT) 作为视觉主干网络,以 32×32 像素的图像块(Patch)切分方式处理图像特征。它在保持优秀语义对齐精度的同时,拥有更轻量的参数规模和更快的推理响应速度,是目前多模态应用落地中的主流平衡选择。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/Generative_AI/Image_to_Text/onnx_clip
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Image_to_Text/onnx_clip/clip_txt.cix
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Image_to_Text/onnx_clip/clip_visual.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
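+
+激活命令示例(假设按前文方式在仓库根目录创建了 .venv,实际路径请按克隆位置替换):
+
+```bash
+source ~/ai_model_hub_25_Q3/.venv/bin/activate
+```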
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/Generative_AI/Image_to_Text/onnx_clip/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Image_to_Text/onnx_clip/model/clip_text_model_vitb32.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Image_to_Text/onnx_clip/model/clip_visual.onnx
+```
+
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── clip_visual.cix
+├── clip_txt.cix
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── ReadMe.md
+└── test_data
+```
+
+### 进行模型量化和转换
+
+#### 转换图像模块
+
+
+
+```bash
+cd ..
+cixbuild cfg/clip_visualbuild.cfg
+```
+
+
+
+#### 转换文本模块
+
+
+
+```bash
+cixbuild cfg/clip_text_model_vitb32build.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
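+
+下面是一个基于 scp 的推送示例(其中的 IP 地址、用户名和目标路径均为假设,请按实际环境替换):
+
+```bash
+scp clip_visual.cix clip_txt.cix radxa@192.168.1.100:~/ai_model_hub_25_Q3/models/Generative_AI/Image_to_Text/onnx_clip/
+```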
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_onnx.py
+[[0.03632354 0.96057177 0.00310465]]
+test_data/000000464522.jpg, max similarity: a dog
+[[0.03074941 0.00429748 0.9649532 ]]
+test_data/000000032811.jpg, max similarity: a bird
+[[0.8280978 0.08798673 0.08391542]]
+test_data/000000010698.jpg, max similarity: a person
+```
+
+
+
+#### 测试图片
+
+
+
+{" "}
+
+
+
+

+
+
+

+
+
+
+{" "}
+
+
+

+
+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### 模型运行结果
+
+
+
+```bash
+$ python3 inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+[[0.09763492 0.00929287 0.89307225]]
+test_data/000000032811.jpg, max similarity: a bird
+[[0.02777621 0.9682566 0.00396715]]
+test_data/000000464522.jpg, max similarity: a dog
+[[0.8495277 0.08247717 0.06799505]]
+test_data/000000010698.jpg, max similarity: a person
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+#### 测试图片
+
+**同上。**
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_deeplab-v3.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_deeplab-v3.mdx
new file mode 100644
index 000000000..87ed21aec
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_deeplab-v3.mdx
@@ -0,0 +1,150 @@
+**DeepLabV3** 是一款由 Google 提出的经典语义分割模型。它通过深入探索空洞卷积(Atrous Convolution)的应用,在不增加参数量的前提下扩大了感受野,有效解决了在深度神经网络中处理多尺度物体分割时的空间信息丢失问题。
+
+- 核心特点: 擅长捕捉图像中复杂且细致的边缘信息,具备极强的多尺度目标感知能力,能够实现像素级的精准类别划分,广泛应用于医疗影像分析、自动驾驶环境感知以及卫星图像处理。
+- 版本说明: 本案例采用 DeepLabV3 架构。作为语义分割领域的基准模型之一,它利用改进的 ASPP 模块和全局平均池化技术,显著提升了对不同尺寸物体的识别精度。它在保持特征提取深度的同时,能够通过精妙的结构设计还原物体的空间结构,是目前工业界兼顾分割效果与技术成熟度的经典平衡选择。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3/deeplab_v3.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3/model/deeplabv3_resnet50.onnx
+```
+
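+如果需要固化输入尺寸或简化模型结构,可以参考旧版文档的做法,使用 onnxsim 处理后再编译(可选步骤,cfg 中的 input_model 与 input_shape 以实际配置为准):
+
+```bash
+pip3 install onnxsim onnxruntime
+onnxsim deeplabv3_resnet50.onnx deeplabv3_resnet50-sim.onnx --overwrite-input-shape 1,3,520,520
+```
+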
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── datasets
+├── deeplab_v3.cix
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── ReadMe.md
+├── test_data
+└── Tutorials.ipynb
+```
+
+### 进行模型量化和转换
+
+
+
+```bash
+cd ..
+cixbuild cfg/onnx_deeplab_v3_build.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py --image_path test_data --onnx_path model/deeplabv3_resnet50.onnx
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_onnx.py --image_path test_data --onnx_path model/deeplabv3_resnet50.onnx
+save output: onnx_ILSVRC2012_val_00004704.JPEG
+```
+
+
+
+
+
+{" "}
+
+

+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py --image_path test_data --model_path deeplab_v3.cix
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_npu.py --image_path test_data --model_path deeplab_v3.cix
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+save output: npu_ILSVRC2012_val_00004704.JPEG
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+
+
+{" "}
+
+

+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_deeplab_v3.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_deeplab_v3.mdx
deleted file mode 100644
index 723438f6d..000000000
--- a/docs/common/orion-common/app-dev/artificial-intelligence/_deeplab_v3.mdx
+++ /dev/null
@@ -1,215 +0,0 @@
-此文档介绍如何使用 CIX P1 NPU SDK 将 [deeplabv3](https://pytorch.org/vision/main/models/generated/torchvision.models.segmentation.deeplabv3_resnet50.html) 转换为 CIX SOC NPU 上可以运行的模型。
-
-整体来讲有四个步骤:
-:::tip
-步骤1~3 在 x86 主机 Linux 环境下执行
-:::
-
-1. 下载 NPU SDK 并安装 NOE Compiler
-2. 下载模型文件 (代码和脚本)
-3. 编译模型
-4. 部署模型到 Orion O6 / O6N
-
-## 下载 NPU SDK 并安装 NOE Compiler
-
-请参考 [安装 NPU SDK](./npu-introduction#npu-sdk-安装) 进行 NPU SDK 和 NOE Compiler 的安装.
-
-## 下载模型文件
-
-在 CIX AI Model Hub 中包含了 DeepLabv3 的所需文件, 请用户按照 [CIX AI Model Hub](./ai-hub#下载-cix-ai-model-hub) 下载
-
-```bash
-cd ai_model_hub/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3
-```
-
-请确认目录结构是否同下图所示。
-
-```bash
-.
-├── cfg
-│ └── onnx_deeplab_v3_build.cfg
-├── datasets
-│ └── calibration_data.npy
-├── graph.json
-├── inference_npu.py
-├── inference_onnx.py
-├── ReadMe.md
-├── test_data
-│ └── ILSVRC2012_val_00004704.JPEG
-└── Tutorials.ipynb
-```
-
-## 编译模型
-
-:::tip
-用户可无需从头编译模型,radxa 提供预编译好的 deeplab_v3.cix 模型(可用下面步骤下载),如果使用预编译好的模型,可以跳过“编译模型” 这一步
-
-```bash
-wget https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3/deeplab_v3.cix
-```
-
-:::
-
-### 准备 onnx 模型
-
-- 下载 onnx 模型
-
- [deeplabv3_resnet50.onnx](https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3/model/deeplabv3_resnet50.onnx)
-
-- 简化模型
-
- 这里使用 onnxsim 进行模型输入固化和模型简化
-
- ```bash
- pip3 install onnxsim onnxruntime
- onnxsim deeplabv3_resnet50.onnx deeplabv3_resnet50-sim.onnx --overwrite-input-shape 1,3,520,520
- ```
-
-### 编译模型
-
-CIX SOC NPU 支持 INT8 计算,在编译模型前,我们需要使用 NOE Compiler 对模型进行 INT8 量化
-
-- 准备校准集
-
- - 使用 `datasets` 现有校准集
-
- ```bash
- .
- └── calibration_data.npy
- ```
-
- - 自行准备校准集
-
- 在 `test_data` 目录下已经包含多张校准集的图片文件
-
- ```bash
- .
- ├── 1.jpeg
- └── 2.jpeg
- ```
-
- 参考以下脚本生成校准文件
-
- ```python
- import sys
- import os
- import numpy as np
- _abs_path = os.path.join(os.getcwd(), "../../../../")
- sys.path.append(_abs_path)
- from utils.image_process import preprocess_image_deeplabv3
- from utils.tools import get_file_list
- # Get a list of images from the provided path
- images_path = "test_data"
- images_list = get_file_list(images_path)
- data = []
- for image_path in images_list:
- input = preprocess_image_deeplabv3(image_path)
- data.append(input)
- # concat the data and save calib dataset
- data = np.concatenate(data, axis=0)
- np.save("datasets/calib_data_tmp.npy", data)
- print("Generate calib dataset success.")
- ```
-
-- 使用 NOE Compiler 量化与编译模型
-
- - 制作量化与编译 cfg 配置文件, 请参考以下配置
-
- ```bash
- [Common]
- mode = build
-
- [Parser]
- model_type = onnx
- model_name = deeplab_v3
- detection_postprocess =
- model_domain = image_segmentation
- input_model = ./deeplabv3_resnet50-sim.onnx
- input = input
- input_shape = [1, 3, 520, 520]
- output = output
- output_dir = ./
-
- [Optimizer]
- output_dir = ./
- calibration_data = ./datasets/calib_data_tmp.npy
- calibration_batch_size = 1
- metric_batch_size = 1
- dataset = NumpyDataset
- quantize_method_for_weight = per_channel_symmetric_restricted_range
- quantize_method_for_activation = per_tensor_asymmetric
- save_statistic_info = True
-
- [GBuilder]
- outputs = deeplab_v3.cix
- target = X2_1204MP3
- profile = True
- tiling = fps
- ```
-
- - 编译模型
- :::tip
- 如果遇到 cixbuild 报错 `[E] Optimizing model failed! CUDA error: no kernel image is available for execution on the device ...`
- 这意味着当前版本的 torch 不支持此 GPU,请完全卸载当前版本的 torch, 然后在 torch 官网下载最新版本。
- :::
- ```bash
- cixbuild ./onnx_deeplab_v3_build.cfg
- ```
-
-## 模型部署
-
-### NPU 推理
-
-将使用 NOE Compiler 编译好的 .cix 格式的模型复制到 Orion O6 / O6N 开发板上进行模型验证
-
-```bash
-python3 inference_npu.py --images ./test_data/ --model_path ./deeplab_v3.ci
-```
-
-```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3$ time python3 inference_npu.py --images ./test_data/ --model_path ./deeplab_v3.cix
-npu: noe_init_context success
-npu: noe_load_graph success
-Input tensor count is 1.
-Output tensor count is 1.
-npu: noe_create_job success
-save output: noe_ILSVRC2012_val_00004704.JPEG
-npu: noe_clean_job success
-npu: noe_unload_graph success
-npu: noe_deinit_context success
-
-real 0m9.047s
-user 0m4.314s
-sys 0m0.478s
-```
-
-结果保存在 `output` 文件夹中
-
-
-
-### CPU 推理
-
-使用 CPU 对 onnx 模型进行推理验证正确性,可在 X86 主机上或 Orion O6 / O6N 上运行
-
-```bash
-python3 inference_onnx.py --images ./test_data/ --onnx_path ./deeplabv3_resnet50-sim.onnx
-```
-
-```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3$ time python3 inference_onnx.py --images ./test_data/ --onnx_path ./deeplabv3_resnet50-sim.onnx
-save output: onnx_ILSVRC2012_val_00004704.JPEG
-
-real 0m7.605s
-user 0m33.235s
-sys 0m0.558s
-
-```
-
-结果保存在 `output` 文件夹中
-
-
-可以看到 NPU 和 CPU 上推理的结果一致,但运行速度大幅缩短
-
-## 参考文档
-
-论文链接: [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_env-setup.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_env-setup.mdx
new file mode 100644
index 000000000..573e4be94
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_env-setup.mdx
@@ -0,0 +1,116 @@
+## 前言
+
+我们需要访问此芯开发者中心获取最新的此芯 AI 开发工具包 NOE SDK(NeuralONE AI SDK)。
+
+NOE SDK 通过 NPU 异构硬件加速,助力开发者高效开发并部署高能效的端侧 AI 推理应用。
+
+:::tip 此芯开发者中心
+
+此芯开发者中心包含软件 SDK、芯片手册、开发文档等资源。
+
+:::
+
+## 获取 SDK
+
+注册并登录[此芯开发者中心](https://developer.cixtech.com/),向下滚动找到软件 SDK 栏目,点击 NeuralONE AI SDK 的`了解详情`即可自动下载 SDK。
+
+解压 SDK:
+
+
+
+```bash
+tar -zxvf cix_noe_sdk_xxx_release.tar.gz
+```
+
+
+
+解压后的文件夹目录结构:
+
+```bash
+.
+├── CixBuilder-6.1.3407.2-cp310-none-linux_x86_64.whl
+├── cix-noe-umd_2.0.2_arm64.deb
+├── cix-npu-driver_2.0.1_arm64.deb
+├── env_setup.sh
+├── npu_sdk_last_manifest_list.xml
+└── requirements.txt
+```
+
+## 配置主机环境
+
+### 创建虚拟环境
+
+推荐使用 [miniconda](https://www.anaconda.com/docs/getting-started/miniconda/main) 管理虚拟环境。
+
+:::warning[Python 版本]
+SDK 只兼容 Python 3.10。
+:::
+
+
+
+```bash
+conda create -n noe python=3.10
+conda activate noe
+```
+
+
+
+### 使用脚本配置开发环境
+
+
+
+```bash
+bash env_setup.sh
+```
+
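+如果脚本执行异常,也可以参照旧版文档的做法手动安装编译器及其依赖(whl 文件名以实际解压结果为准):
+
+```bash
+pip3 install -r requirements.txt
+pip3 install ./CixBuilder-6.1.3407.2-cp310-none-linux_x86_64.whl
+```
+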
+
+
+### 验证编译环境
+
+在终端执行 `cixbuild -v` 查看编译环境版本,确认编译器安装成功。
+
+
+
+```bash
+cixbuild -v
+```
+
+
+
+## 配置板端环境
+
+### 安装 NPU 驱动
+
+:::tip[NPU驱动]
+瑞莎星睿 O6 / O6N 的官方系统镜像已经预先安装好了 NPU 驱动,无需重复安装。
+:::
+
+将 NPU 驱动安装包推送到板端,执行下面的命令进行安装。
+
+
+
+```bash
+sudo dpkg -i ./cix-npu-driver_xxx_arm64.deb
+```
+
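+安装完成后,可以用 dpkg 简单确认驱动包的安装状态(示例命令,输出以实际环境为准):
+
+```bash
+dpkg -s cix-npu-driver
+```
+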
+
+
+### 安装 NPU UMD
+
+:::info[UMD]
+UMD 即 User Mode Driver(用户态驱动)。
+
+UMD 以动态库形式提供标准 API,负责解析应用请求并协调 NPU 驱动实现资源分配与任务下发。
+
+详细的 API 使用参考 [**API 手册**](../../../../orion/o6/app-development/artificial-intelligence/API-manual.md)。
+:::
+
+将 UMD 安装包推送到板端,执行下面的命令安装。
+
+
+
+```bash
+sudo dpkg -i ./cix-noe-umd_xxx_arm64.deb
+```
+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_fast-scnn.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_fast-scnn.mdx
new file mode 100644
index 000000000..47defe995
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_fast-scnn.mdx
@@ -0,0 +1,128 @@
+**Fast-SCNN** 是一款专为高分辨率图像实时语义分割设计的轻量级卷积神经网络。它采用了创新的多分支架构,通过共享特征提取模块与轻量化的设计思路,有效解决了传统分割模型在处理大尺寸图像时计算压力过大的痛点。
+
+- 核心特点:专注于像素级的实时语义分割,能够在极低延迟下对复杂场景进行类别标注,广泛应用于自动驾驶、移动端增强现实(AR)以及机器人避障等对响应速度要求极高的领域。
+- 版本说明:本案例采用 Fast-SCNN 模型。该模型通过独特的“学习下采样”模块结合全局特征提取技术,在不牺牲核心空间细节的前提下大幅提升了推理效率。它摆脱了对高性能图形处理器的依赖,是目前在嵌入式端实现高分辨率实时图像理解的主流轻量化选择。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Semantic_Segmentation/torch_fast_scnn
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Semantic_Segmentation/torch_fast_scnn/fast_scnn.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
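+如需粗略对比运行耗时,可以沿用旧版文档的做法,用 time 包裹运行命令:
+
+```bash
+time python3 inference_npu.py
+```
+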
+
+
+## 完整转换流程
+
+### 项目结构
+
+```txt
+├── cfg
+├── datasets
+├── fast_scnn.cix
+├── inference_npu.py
+├── inference_pt.py
+├── model
+├── ReadMe.md
+└── test_data
+```
+
+### 进行模型量化和转换
+
+
+
+```bash
+cixbuild cfg/fast_scnnbuild.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_pt.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+{" "}
+
+

+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+
+
+{" "}
+
+

+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_midas-v2.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_midas-v2.mdx
new file mode 100644
index 000000000..3eed57c53
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_midas-v2.mdx
@@ -0,0 +1,271 @@
+**MiDaS** 是一款专注于单目深度估计(Monocular Depth Estimation)的前沿深度学习模型。它摆脱了传统算法对双目相机或红外传感器的依赖,仅凭单幅彩色图像即可推算出场景的相对深度信息,有效地将二维图像转化为具备空间层次感的深度图。
+
+- 核心特点:具备卓越的零样本(Zero-shot)泛化能力,能够从容应对从未见过的室内外复杂环境,生成的深度图具有清晰的物体轮廓与平滑的层次过渡,广泛应用于增强现实(AR)、照片背景虚化、机器人避障及 3D 场景重建。
+- 版本说明:本案例采用 MiDaS v2 版本。作为该系列的经典成熟版本,它通过在大规模混合数据集上的预训练,解决了单目深度估计中常见的场景局限性问题。在保持主流推理速度的同时,它能够提供稳定且具有高空间保真度的深度预测,是目前工业界实现低成本、高质量空间感知任务的平衡优选。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Depth_Estimation/onnx_MiDaS_v2
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Depth_Estimation/onnx_MiDaS_v2/MiDaS_v2.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Depth_Estimation/onnx_MiDaS_v2/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Depth_Estimation/onnx_MiDaS_v2/model/MiDaS_v2.onnx
+```
+
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── README.md
+├── test_data
+└── MiDaS_v2.cix
+```
+
+### 进行模型量化和转换
+
+
+
+```bash
+cd ..
+cixbuild cfg/onnx_MiDasV2build.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_onnx.py
+initialize
+loading model...
+ processing ./test_data/1.jpg
+Inference time: 18.44 ms
+ processing ./test_data/2.jpg
+Inference time: 16.14 ms
+ processing ./test_data/3.jpg
+Inference time: 15.61 ms
+Finished
+```
+
+
+
+
+
+{/* 左侧容器:放竖图 */}
+
+{" "}
+
+
+

+
+
+{/* 右侧容器:放两张横图 */}
+
+{" "}
+
+
+
+

+
+
+

+
+
+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_npu.py
+initialize
+loading model...
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+ processing ./test_data/3.jpg
+Inference time: 4.72 ms
+ processing ./test_data/2.jpg
+Inference time: 6.10 ms
+ processing ./test_data/1.jpg
+Inference time: 6.42 ms
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+finished
+```
+
+
+
+
+
+{/* 左侧容器:放竖图 */}
+
+{" "}
+
+
+

+
+
+{/* 右侧容器:放两张横图 */}
+
+{" "}
+
+
+
+

+
+
+

+
+
+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_minicpm-o-2-6.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_minicpm-o-2-6.mdx
new file mode 100644
index 000000000..5cb598558
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_minicpm-o-2-6.mdx
@@ -0,0 +1,308 @@
+**Qwen2-VL** 是由阿里云通义千问团队研发的开源多模态视觉语言模型系列。
+该系列模型通过统一的视觉编码器与大语言模型基座深度融合,旨在实现强大的图像理解、细粒度推理与开放世界对话能力。
+
+- 核心特点:系列模型普遍具备高效的视觉语义对齐能力,支持对图像内容的精准描述、复杂问答、逻辑推理以及多轮交互。
+ 其架构设计兼顾了性能与效率,在文档分析、智能助理、多模态搜索等场景中有广泛的应用潜力。
+- 版本说明:本模型 Qwen2-VL-2B-Instruct 是该系列的轻量化实践版本,参数量约为 20 亿,并经过指令微调优化,适合在端侧与低资源环境中部署,实现实时多模态交互。
+
+## 环境配置
+
+参考 [llama.cpp](../../../../orion/o6/app-development/artificial-intelligence/llama_cpp.md) 文档准备好 llama.cpp 工具。
+
+## 快速开始
+
+### 下载模型
+
+
+
+```bash
+pip3 install modelscope
+cd llama.cpp
+modelscope download --model radxa/Qwen2-VL-2B-Instruct-NOE mmproj-Qwen2-VL-2b-Instruct-F16.gguf --local_dir ./
+modelscope download --model radxa/Qwen2-VL-2B-Instruct-NOE Qwen2-VL-2B-Instruct-Q5_K_M.gguf --local_dir ./
+modelscope download --model radxa/Qwen2-VL-2B-Instruct-NOE test.png --local_dir ./
+```
+
+
+
+### 运行模型
+
+
+
+```bash
+./build/bin/llama-mtmd-cli -m ./Qwen2-VL-2B-Instruct-Q5_K_M.gguf --mmproj ./mmproj-Qwen2-VL-2b-Instruct-F16.gguf -p "Describe this image." --image ./test.png
+```
+
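+如需评估该量化模型的推理速度,也可以用本文前面介绍的 llama-bench 进行测试(仅针对文本模块,线程数为示例值):
+
+```bash
+./build/bin/llama-bench -m ./Qwen2-VL-2B-Instruct-Q5_K_M.gguf -t 8
+```
+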
+
+
+## 完整转换流程
+
+### 克隆模型仓库
+
+
+
+```bash
+cd llama.cpp
+git clone https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct
+```
+
+
+
+### 创建虚拟环境
+
+
+
+```bash
+python3 -m venv .venv
+source .venv/bin/activate
+pip3 install -r requirements.txt
+```
+
+
+
+### 模型转换
+
+#### 转换文本模块
+
+
+
+```bash
+python3 ./convert_hf_to_gguf.py ./Qwen2-VL-2B-Instruct
+```
+
+
+
+#### 转换视觉模块
+
+
+
+```bash
+python3 ./convert_hf_to_gguf.py --mmproj ./Qwen2-VL-2B-Instruct
+```
+
+
+
+### 模型量化
+
+这里采用 Q5_K_M 量化。
+
+
+
+```bash
+./build/bin/llama-quantize ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-F16.gguf ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf Q5_K_M
+```
+
+
+
+### 模型测试
+
+
+

+
+ 模型测试输入
+
+
+
+
+
+```bash
+./build/bin/llama-mtmd-cli -m ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf --mmproj ./Qwen2-VL-2B-Instruct/mmproj-Qwen2-VL-2b-Instruct-F16.gguf -p "Describe this image." --image ./Qwen2-VL-2B-Instruct/test.png
+```
+
+
+
+模型输出:
+
+```bash
+$ ./build/bin/llama-mtmd-cli -m ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf --mmproj ./Qwen2-VL-2B-Instruct/mmproj-Qwen
+2-VL-2b-Instruct-F16.gguf -p "Describe this image." --image ./Qwen2-VL-2B-Instruct/test.png
+build: 7110 (3ae282a06) with cc (Debian 12.2.0-14+deb12u1) 12.2.0 for aarch64-linux-gnu
+llama_model_loader: loaded meta data with 33 key-value pairs and 338 tensors from ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf (version GGUF V3 (latest))
+llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
+llama_model_loader: - kv 0: general.architecture str = qwen2vl
+llama_model_loader: - kv 1: general.type str = model
+llama_model_loader: - kv 2: general.name str = Qwen2 VL 2B Instruct
+llama_model_loader: - kv 3: general.finetune str = Instruct
+llama_model_loader: - kv 4: general.basename str = Qwen2-VL
+llama_model_loader: - kv 5: general.size_label str = 2B
+llama_model_loader: - kv 6: general.license str = apache-2.0
+llama_model_loader: - kv 7: general.base_model.count u32 = 1
+llama_model_loader: - kv 8: general.base_model.0.name str = Qwen2 VL 2B
+llama_model_loader: - kv 9: general.base_model.0.organization str = Qwen
+llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2-VL-2B
+llama_model_loader: - kv 11: general.tags arr[str,2] = ["multimodal", "image-text-to-text"]
+llama_model_loader: - kv 12: general.languages arr[str,1] = ["en"]
+llama_model_loader: - kv 13: qwen2vl.block_count u32 = 28
+llama_model_loader: - kv 14: qwen2vl.context_length u32 = 32768
+llama_model_loader: - kv 15: qwen2vl.embedding_length u32 = 1536
+llama_model_loader: - kv 16: qwen2vl.feed_forward_length u32 = 8960
+llama_model_loader: - kv 17: qwen2vl.attention.head_count u32 = 12
+llama_model_loader: - kv 18: qwen2vl.attention.head_count_kv u32 = 2
+llama_model_loader: - kv 19: qwen2vl.rope.freq_base f32 = 1000000.000000
+llama_model_loader: - kv 20: qwen2vl.attention.layer_norm_rms_epsilon f32 = 0.000001
+llama_model_loader: - kv 21: qwen2vl.rope.dimension_sections arr[i32,4] = [16, 24, 24, 0]
+llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
+llama_model_loader: - kv 23: tokenizer.ggml.pre str = qwen2
+llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
+llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
+llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
+llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 151645
+llama_model_loader: - kv 28: tokenizer.ggml.padding_token_id u32 = 151643
+llama_model_loader: - kv 29: tokenizer.ggml.bos_token_id u32 = 151643
+llama_model_loader: - kv 30: tokenizer.chat_template str = {% set image_count = namespace(value=...
+llama_model_loader: - kv 31: general.quantization_version u32 = 2
+llama_model_loader: - kv 32: general.file_type u32 = 17
+llama_model_loader: - type f32: 141 tensors
+llama_model_loader: - type q5_K: 168 tensors
+llama_model_loader: - type q6_K: 29 tensors
+print_info: file format = GGUF V3 (latest)
+print_info: file type = Q5_K - Medium
+print_info: file size = 1.04 GiB (5.80 BPW)
+load: printing all EOG tokens:
+load: - 151643 ('<|endoftext|>')
+load: - 151645 ('<|im_end|>')
+load: special tokens cache size = 14
+load: token to piece cache size = 0.9309 MB
+print_info: arch = qwen2vl
+print_info: vocab_only = 0
+print_info: n_ctx_train = 32768
+print_info: n_embd = 1536
+print_info: n_embd_inp = 1536
+print_info: n_layer = 28
+print_info: n_head = 12
+print_info: n_head_kv = 2
+print_info: n_rot = 128
+print_info: n_swa = 0
+print_info: is_swa_any = 0
+print_info: n_embd_head_k = 128
+print_info: n_embd_head_v = 128
+print_info: n_gqa = 6
+print_info: n_embd_k_gqa = 256
+print_info: n_embd_v_gqa = 256
+print_info: f_norm_eps = 0.0e+00
+print_info: f_norm_rms_eps = 1.0e-06
+print_info: f_clamp_kqv = 0.0e+00
+print_info: f_max_alibi_bias = 0.0e+00
+print_info: f_logit_scale = 0.0e+00
+print_info: f_attn_scale = 0.0e+00
+print_info: n_ff = 8960
+print_info: n_expert = 0
+print_info: n_expert_used = 0
+print_info: n_expert_groups = 0
+print_info: n_group_used = 0
+print_info: causal attn = 1
+print_info: pooling type = -1
+print_info: rope type = 8
+print_info: rope scaling = linear
+print_info: freq_base_train = 1000000.0
+print_info: freq_scale_train = 1
+print_info: n_ctx_orig_yarn = 32768
+print_info: rope_finetuned = unknown
+print_info: mrope sections = [16, 24, 24, 0]
+print_info: model type = 1.5B
+print_info: model params = 1.54 B
+print_info: general.name = Qwen2 VL 2B Instruct
+print_info: vocab type = BPE
+print_info: n_vocab = 151936
+print_info: n_merges = 151387
+print_info: BOS token = 151643 '<|endoftext|>'
+print_info: EOS token = 151645 '<|im_end|>'
+print_info: EOT token = 151645 '<|im_end|>'
+print_info: PAD token = 151643 '<|endoftext|>'
+print_info: LF token = 198 'Ċ'
+print_info: EOG token = 151643 '<|endoftext|>'
+print_info: EOG token = 151645 '<|im_end|>'
+print_info: max token length = 256
+load_tensors: loading model tensors, this can take a while... (mmap = true)
+load_tensors: CPU_Mapped model buffer size = 1067.26 MiB
+....................................................................................
+llama_context: constructing llama_context
+llama_context: n_seq_max = 1
+llama_context: n_ctx = 4096
+llama_context: n_ctx_seq = 4096
+llama_context: n_batch = 2048
+llama_context: n_ubatch = 512
+llama_context: causal_attn = 1
+llama_context: flash_attn = auto
+llama_context: kv_unified = false
+llama_context: freq_base = 1000000.0
+llama_context: freq_scale = 1
+llama_context: n_ctx_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
+llama_context: CPU output buffer size = 0.58 MiB
+llama_kv_cache: CPU KV buffer size = 112.00 MiB
+llama_kv_cache: size = 112.00 MiB ( 4096 cells, 28 layers, 1/1 seqs), K (f16): 56.00 MiB, V (f16): 56.00 MiB
+llama_context: Flash Attention was auto, set to enabled
+llama_context: CPU compute buffer size = 302.75 MiB
+llama_context: graph nodes = 959
+llama_context: graph splits = 1
+common_init_from_params: added <|endoftext|> logit bias = -inf
+common_init_from_params: added <|im_end|> logit bias = -inf
+common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
+common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
+mtmd_cli_context: chat template example:
+<|im_start|>system
+You are a helpful assistant<|im_end|>
+<|im_start|>user
+Hello<|im_end|>
+<|im_start|>assistant
+Hi there<|im_end|>
+<|im_start|>user
+How are you?<|im_end|>
+<|im_start|>assistant
+
+clip_model_loader: model name: Qwen2 VL 2B Instruct
+clip_model_loader: description:
+clip_model_loader: GGUF version: 3
+clip_model_loader: alignment: 32
+clip_model_loader: n_tensors: 520
+clip_model_loader: n_kv: 27
+
+clip_model_loader: has vision encoder
+clip_ctx: CLIP using CPU backend
+load_hparams: Qwen-VL models require at minimum 1024 image tokens to function correctly on grounding tasks
+load_hparams: if you encounter problems with accuracy, try adding --image-min-tokens 1024
+load_hparams: more info: https://github.com/ggml-org/llama.cpp/issues/16842
+
+load_hparams: projector: qwen2vl_merger
+load_hparams: n_embd: 1280
+load_hparams: n_head: 16
+load_hparams: n_ff: 1536
+load_hparams: n_layer: 32
+load_hparams: ffn_op: gelu_quick
+load_hparams: projection_dim: 1536
+
+--- vision hparams ---
+load_hparams: image_size: 560
+load_hparams: patch_size: 14
+load_hparams: has_llava_proj: 0
+load_hparams: minicpmv_version: 0
+load_hparams: n_merge: 2
+load_hparams: n_wa_pattern: 0
+load_hparams: image_min_pixels: 6272
+load_hparams: image_max_pixels: 3211264
+
+load_hparams: model size: 1269.94 MiB
+load_hparams: metadata size: 0.18 MiB
+alloc_compute_meta: warmup with image size = 1288 x 1288
+alloc_compute_meta: CPU compute buffer size = 267.08 MiB
+alloc_compute_meta: graph splits = 1, nodes = 1085
+warmup: flash attention is enabled
+main: loading model: ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf
+encoding image slice...
+image slice encoded in 11683 ms
+decoding image batch 1/1, n_tokens_batch = 361
+image decoded (batch 1/1) in 6250 ms
+
+The image depicts a single rose placed on a marble surface, likely a table or a shelf. The rose is positioned in such a way that it is slightly tilted, with its petals facing upwards. The background features a dark, possibly stone or marble, wall with a textured surface, and a window or mirror reflecting the surroundings. The overall composition of the image creates a serene and elegant atmosphere.
+
+
+llama_perf_context_print: load time = 416.66 ms
+llama_perf_context_print: prompt eval time = 18253.30 ms / 375 tokens ( 48.68 ms per token, 20.54 tokens per second)
+llama_perf_context_print: eval time = 5283.83 ms / 78 runs ( 67.74 ms per token, 14.76 tokens per second)
+llama_perf_context_print: total time = 23892.18 ms / 453 tokens
+llama_perf_context_print: graphs reused = 0
+```
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_mobilenet-v2.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_mobilenet-v2.mdx
new file mode 100644
index 000000000..d39d50564
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_mobilenet-v2.mdx
@@ -0,0 +1,152 @@
+**MobileNet** 是由 Google 专门为移动端和嵌入式设备设计的轻量级深度神经网络系列。它通过革新卷积计算方式,大幅降低了模型的参数量与计算复杂度,使得高性能视觉算法能够在智能手机、物联网终端等算力受限的设备上实时运行。
+
+- 核心特点:支持高效的图像分类、目标检测及语义分割,通过极低的延迟实现高质量的视觉感知,是移动端深度学习应用的核心引擎。
+- 版本说明:本案例采用 MobileNetV2 模型。作为该系列的进阶版本,它采用了独特的“倒残差与线性瓶颈”架构,不仅提升了内存利用率,还增强了复杂特征的提取能力。它是目前移动视觉领域兼具极速推理与优秀精度的行业标杆,是端侧 AI 应用中最具性价比的平衡选择。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Image_Classification/onnx_mobilenet_v2
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Image_Classification/onnx_mobilenet_v2/mobilenet_v2.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Image_Classification/onnx_mobilenet_v2/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Image_Classification/onnx_mobilenet_v2/model/mobilenet_v2.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Image_Classification/onnx_mobilenet_v2/model/mobilenetv2-7.onnx
+```
+
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── mobilenet_v2.cix
+├── model
+├── ReadMe.md
+└── test_data
+```
+
+### 进行模型量化和转换
+
+
+
+```bash
+cd ..
+cixbuild cfg/onnx_mobilenet_v2build.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py --images test_data --onnx_path model/mobilenetv2-7.onnx
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_onnx.py --images test_data --onnx_path model/mobilenetv2-7.onnx
+image path : test_data/ILSVRC2012_val_00024154.JPEG
+Ibizan hound, Ibizan Podenco
+image path : test_data/ILSVRC2012_val_00021564.JPEG
+coucal
+image path : test_data/ILSVRC2012_val_00002899.JPEG
+rock python, rock snake, Python sebae
+image path : test_data/ILSVRC2012_val_00045790.JPEG
+Yorkshire terrier
+image path : test_data/ILSVRC2012_val_00037133.JPEG
+ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
+```
+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py --images test_data --model_path mobilenet_v2.cix
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_npu.py --images test_data --model_path mobilenet_v2.cix
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+image path : ./test_data/ILSVRC2012_val_00037133.JPEG
+ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
+image path : ./test_data/ILSVRC2012_val_00021564.JPEG
+coucal
+image path : ./test_data/ILSVRC2012_val_00024154.JPEG
+Ibizan hound, Ibizan Podenco
+image path : ./test_data/ILSVRC2012_val_00002899.JPEG
+boa constrictor, Constrictor constrictor
+image path : ./test_data/ILSVRC2012_val_00045790.JPEG
+Yorkshire terrier
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_npu-introduction.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_npu-introduction.mdx
deleted file mode 100644
index 5c1852c58..000000000
--- a/docs/common/orion-common/app-dev/artificial-intelligence/_npu-introduction.mdx
+++ /dev/null
@@ -1,67 +0,0 @@
-## NPU 简介
-
-瑞莎星睿 O6 / O6N 拥有高达 28.8 TOPS NPU (Neural Processing Unit) 算力,支持 INT4 / INT8 / INT16 / FP16 / BF16 和 TF32 类型的加速。
-
-此文档主要讲解用户如何使用 CIX P1 NPU SDK 运行基于 NPU 加速推理的人工智能模型和应用。包括模型的编译工具、工具链和一些常见模型案例的使用方法进行逐步讲解。
-
-## 此芯 SDK
-
-我们需要访问[此芯开发者中心](https://developer.cixtech.com/)获取最新的此芯 AI 开发工具包(NeuralONE AI SDK)。
-
-此芯P1AI开发工具包,支持NPU等异构硬件加速,助力开发者开发端侧AI应用,实现高能效端侧AI推理。
-
-:::tip 此芯开发者中心
-
-此芯开发者中心包含软件 SDK、芯片手册、开发文档等资源。
-
-:::
-
-### 下载 SDK
-
-注册并登录[此芯开发者中心](https://developer.cixtech.com/),在软件 SDK 内点击 NeuralONE AI SDK 选项的了解更多选项,会自动下载 SDK。
-
-### 解压 SDK
-
-```bash
-tar -xvf cix_noe_sdk_xxx_release.tar.gz
-```
-
-解压后的文件夹中包含以下内容:
-
-- cix-noe-umd_xxx_arm64.deb
-- cix-npu-driver_xxx_arm64.deb
-- CixBuilder_xxx-cp310-none-linux_x86_64.whl
-- env_setup.sh
-- npu_sdk_last_manifest_list.xml
-- requirements.txt
-
-### 安装 NPU 驱动
-
-进入解压后的文件夹,执行以下命令安装 NPU 驱动。
-
-```bash
-sudo dpkg -i ./cix-npu-driver_xxx_arm64.deb
-```
-
-### 安装 NOE 编译器
-
-NOE 编译器用于模型的编译,将 ONNX 模型框架的模型格式转换为可以使用 NPU 进行加速推理的模型格式
-
-```bash
-pip3 install -r requirements.txt
-pip3 install ./CixBuilder_xxx-cp310-none-linux_x86_64.whl
-```
-
-### 安装 NOE UMD
-
-```bash
-sudo dpkg -i ./cix-noe-umd_xxx_arm64.deb
-```
-
-### 验证安装
-
-使用 cixbuild 命令验证 NOE 编译器是否安装成功。
-
-```bash
-cixbuild -v
-```
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_openpose.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_openpose.mdx
index 8f955e031..1c08ba7ee 100644
--- a/docs/common/orion-common/app-dev/artificial-intelligence/_openpose.mdx
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_openpose.mdx
@@ -1,219 +1,141 @@
-此文档介绍如何使用 CIX P1 NPU SDK 将 [OpenPose](https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch) 转换为 CIX SOC NPU 上可以运行的模型。
+**OpenPose** 是一款开创性的实时多人姿态估计架构。它通过首创的部分亲和字段(PAFs)与自底向上(Bottom-Up)的处理机制,打破了传统算法在处理多人场景时计算量随人数增加而大幅增长的局限,实现了对复杂人群中人体骨架的快速重建。
-整体来讲有四个步骤:
-:::tip
-步骤1~3 在 x86 主机 Linux 环境下执行
-:::
+- 核心特点:支持同时对多人进行人体、手部、面部以及足部的关键点提取,具备极强的多尺度感知能力和空间结构建模能力,广泛应用于动作捕获、人机交互、体育竞技分析及安防行为识别等领域。
+- 版本说明:本案例采用 OpenPose 标准架构。作为姿态估计领域的基石,它通过双分支网络同时回归关键点热力图与肢体关联向量,有效解决了复杂遮挡与人体交叠情况下的关联难题。凭借其卓越的算法通用性与成熟的生态支持,它依然是目前实现高精度、多维度人体感知任务中最可靠的经典选择。
-1. 下载 NPU SDK 并安装 NOE Compiler
-2. 下载模型文件 (代码和脚本)
-3. 编译模型
-4. 部署模型到 Orion O6 / O6N
+:::info[环境配置]
+需要提前配置好相关环境。
-## 下载 NPU SDK 并安装 NOE Compiler
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
-请参考 [安装 NPU SDK](./npu-introduction#npu-sdk-安装) 进行 NPU SDK 和 NOE Compiler 的安装.
+## 快速开始
-## 下载模型文件
+### 下载模型文件
-在 CIX AI Model Hub 中包含了 Openose 的所需文件, 请用户按照 [下载 CIX AI Model Hub](./ai-hub#下载-cix-ai-model-hub) 下载,然后到对应的目录下查看
+
```bash
-cd ai_model_hub/models/ComputeVision/Pose_Estimation/onnx_openpose
+cd ai_model_hub_25_Q3/models/ComputeVision/Pose_Estimation/onnx_openpose
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Pose_Estimation/onnx_openpose/human-pose-estimation.cix
```
-请确认目录结构是否同下图所示
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
```bash
-.
+cd ai_model_hub_25_Q3/models/ComputeVision/Pose_Estimation/onnx_openpose/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Pose_Estimation/onnx_openpose/model/human-pose-estimation.onnx
+```
+
+
+
+### 项目结构
+
+```txt
├── cfg
-│ ├── human-pose-estimationbuild.cfg
-│ └── opt_template.json
├── datasets
-│ └── calib_data_my.npy
+├── human-pose-estimation.cix
├── inference_npu.py
├── inference_onnx.py
-├── output_onnx.jpg
+├── model
├── ReadMe.md
└── test_data
- ├── 1.jpeg
- └── 2.jpeg
```
-## 编译模型
+### 进行模型量化和转换
-:::tip
-用户可无需从头编译模型,radxa 提供预编译好的 human-pose-estimation.cix 模型(可用下面步骤下载),如果使用预编译好的模型,可以跳过“编译模型” 这一步
+
```bash
-wget https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Pose_Estimation/onnx_openpose/human-pose-estimation.cix
+cd ..
+cixbuild cfg/human-pose-estimationbuild.cfg
```
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
:::
-### 准备 onnx 模型
-
-- 下载 onnx 模型
-
- [human-pose-estimation.onnx](https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Pose_Estimation/onnx_openpose/model/human-pose-estimation.onnx)
-
-- 简化模型
-
- 这里使用 onnxsim 进行模型输入固化和模型简化
-
- ```bash
- pip3 install onnxsim onnxruntime
- onnxsim human-pose-estimation.onnx human-pose-estimation-sim.onnx --overwrite-input-shape 1,3,256,360
- ```
-
-### 编译模型
-
-CIX SOC NPU 支持 INT8 计算,在编译模型前,我们需要使用 NOE Compiler 对模型进行 INT8 量化
-
-- 准备校准集
-
- - 使用 `datasets` 现有校准集
-
- ```bash
- .
- └── calib_data_my.npy
- ```
-
- - 自行准备校准集
-
- 在 `test_data` 目录下已经包含多张校准集的图片文件
-
- ```bash
- .
- ├── 1.jpeg
- └── 2.jpeg
- ```
-
- 参考以下脚本生成校准文件
-
- ```python
- import sys
- import os
- import numpy as np
- import cv2
- _abs_path = os.path.join(os.getcwd(), "../../../../")
- sys.path.append(_abs_path)
- from utils.image_process import preprocess_openpose
- from utils.tools import get_file_list
- # Get a list of images from the provided path
- images_path = "test_data"
- images_list = get_file_list(images_path)
- data = []
- for image_path in images_list:
- img_numpy = cv2.imread(image_path)
- input = preprocess_openpose(img_numpy, 256)[0]
- data.append(input)
- # concat the data and save calib dataset
- data = np.concatenate(data, axis=0)
- np.save("datasets/calib_data_tmp.npy", data)
- print("Generate calib dataset success.")
- ```
-
-- 使用 NOE Compiler 量化与编译模型
-
- - 制作量化与编译 cfg 配置文件, 请参考以下配置
-
- ```bash
- [Common]
- mode = build
-
- [Parser]
- model_type = ONNX
- model_name = human-pose-estimation
- detection_postprocess =
- model_domain = OBJECT_DETECTION
- input_data_format = NCHW
- input_model = ./human-pose-estimation-sim.onnx
- input = images
- input_shape = [1, 3, 256, 360]
- output_dir = ./
-
- [Optimizer]
- dataset = numpydataset
- calibration_data = ./datasets/calib_data_tmp.npy
- calibration_batch_size = 1
- output_dir = ./
- quantize_method_for_activation = per_tensor_asymmetric
- quantize_method_for_weight = per_channel_symmetric_restricted_range
- save_statistic_info = True
- opt_config = cfg/opt_template.json
- cast_dtypes_for_lib = True
-
- [GBuilder]
- target = X2_1204MP3
- outputs = human-pose-estimation.cix
- tiling = fps
- ```
-
- - 编译模型
- :::tip
- 如果遇到 cixbuild 报错 `[E] Optimizing model failed! CUDA error: no kernel image is available for execution on the device ...`
- 这意味着当前版本的 torch 不支持此 GPU,请完全卸载当前版本的 torch, 然后在 torch 官网下载最新版本。
- :::
- ```bash
- cixbuild ./human-pose-estimationbuild.cfg
- ```
-
-## 模型部署
-
-### NPU 推理
-
-将使用 NOE Compiler 编译好的 .cix 格式的模型复制到 Orion O6 / O6N 上进行模型验证
+### 测试主机推理
+
+#### 运行推理脚本
+
+
```bash
-python3 inference_npu.py --image_path ./test_data/ --model_path human-pose-estimation.cix
+python3 inference_onnx.py
```
-```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Pose_Estimation/onnx_openpose$ time python3 inference_npu.py --image_path ./test_data/ --model_path human-pose-estimation.cix
-npu: noe_init_context success
-npu: noe_load_graph success
-Input tensor count is 1.
-Output tensor count is 4.
-npu: noe_create_job success
-npu: noe_clean_job success
-npu: noe_unload_graph success
-npu: noe_deinit_context success
+
-real 0m2.788s
-user 0m3.158s
-sys 0m0.276s
-```
+#### 模型推理结果
+
+
+
+{" "}
-结果保存在 `output` 文件夹中
+

+

-
+
-
+### 进行 NPU 部署
-### CPU 推理
+#### 运行推理脚本
-使用 CPU 对 onnx 模型进行推理验证正确性,可在 X86 主机上或 Orion O6 / O6N 上运行
+
```bash
-python3 inference_onnx.py --image_path ./test_data/ --onnx_path ./yolov8l.onnx
+python3 inference_npu.py
```
-```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Pose_Estimation/onnx_openpose$ time python3 inference_onnx.py --image_path ./test_data/ --onnx_path human-pose-estimation.onnx
+
+
+#### 模型推理结果
-real 0m3.138s
-user 0m6.961s
-sys 0m0.318s
+
+
+```bash
+$ python3 inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 4.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
```
-结果保存在 `output` 文件夹中
-
+
-
+
-可以看到 NPU 和 CPU 上推理的结果一致,但运行速度大幅缩短
+{" "}
-## 参考文档
+

+

-论文链接: [Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose](https://arxiv.org/abs/1811.12004)
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_pp-ocr-v4.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_pp-ocr-v4.mdx
new file mode 100644
index 000000000..b676e8d92
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_pp-ocr-v4.mdx
@@ -0,0 +1,277 @@
+**PP-OCR** 是由百度研发的开源通用 OCR 模型系列。它采用了一套完整的端到端视觉识别链路,涵盖了文字检测、方向分类和文字识别三大核心模块,旨在为开发者提供在各种复杂环境下都能稳定运行的文本提取能力。
+
+- 核心特点:支持多语种的高精度文字提取与识别,具备极强的背景噪声抑制能力和对倾斜、模糊文字的鲁棒性,广泛应用于文档数字化、工业质检、车牌识别及自动驾驶场景。
+- 版本说明:本案例采用 PP-OCRv4 模型。作为该系列的最新进阶版本,它通过引入更加轻量且强大的检测架构与识别蒸馏技术,在不增加额外计算开销的前提下,显著提升了对于小文字和生僻字的识别准确度。它是目前工业界在处理移动端实时文本分析任务时,兼顾高精度与极致推理速度的轻量化最优选择。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/OCR/onnx_PP_OCRv4
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/OCR/onnx_PP_OCRv4/cls.cix
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/OCR/onnx_PP_OCRv4/PP-OCRv4_det.cix
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/OCR/onnx_PP_OCRv4/rec.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/OCR/onnx_PP_OCRv4/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/OCR/onnx_PP_OCRv4/model/cls.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/OCR/onnx_PP_OCRv4/model/PP-OCRv4_det.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/OCR/onnx_PP_OCRv4/model/rec.onnx
+```
+
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── cls.cix
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── ppocr_keys_v1.txt
+├── pp_ocr.py
+├── PP-OCRv4_det.cix
+├── ReadMe.md
+├── rec.cix
+├── simfang.ttf
+└── test_data
+```
+
+### 进行模型量化和转换
+
+#### 转换 Detection 检测模块
+
+
+
+```bash
+cd ..
+cixbuild cfg/detbuild.cfg
+```
+
+
+
+#### 转换 Classification 分类模块
+
+
+
+```bash
+cixbuild cfg/clsbuild.cfg
+```
+
+
+
+#### 转换 Recognition 识别模块
+
+
+
+```bash
+cixbuild cfg/recbuild.cfg
+```
+
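+三个模块也可以用一个简单的循环依次编译:
+
+```bash
+for f in detbuild clsbuild recbuild; do
+  cixbuild "cfg/${f}.cfg"
+done
+```
+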
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_onnx.py
+[[[36.0, 409.0], [486.0, 386.0], [489.0, 434.0], [38.0, 457.0]], ('上海斯格威铂尔大酒店', 0.9942322969436646)]
+[[[183.0, 453.0], [401.0, 444.0], [403.0, 485.0], [185.0, 494.0]], ('打浦路15号', 0.9480939507484436)]
+[[[14.0, 501.0], [519.0, 483.0], [521.0, 537.0], [15.0, 555.0]], ('绿洲仕格维花园公寓', 0.9961597919464111)]
+[[[73.0, 550.0], [451.0, 539.0], [452.0, 576.0], [74.0, 587.0]], ('打浦路2529 35号→', 0.9754183292388916)]
+[[[292.0, 295.0], [335.0, 294.0], [350.0, 852.0], [307.0, 853.0]], ('土地整治与土壤修复研究中心', 0.9570525288581848)]
+[[[343.0, 298.0], [380.0, 297.0], [389.0, 665.0], [352.0, 666.0]], ('华南农业大学-东图', 0.9861757755279541)]
+[[[34.0, 79.0], [440.0, 82.0], [439.0, 174.0], [33.0, 171.0]], ('纯臻营养护发素', 0.9949513673782349)]
+[[[31.0, 183.0], [253.0, 183.0], [253.0, 243.0], [31.0, 243.0]], ('产品信息/参数', 0.9937998652458191)]
+[[[39.0, 258.0], [469.0, 258.0], [469.0, 309.0], [39.0, 309.0]], ('(45元/每公斤,100公斤起订', 0.9810954928398132)]
+[[[35.0, 325.0], [410.0, 327.0], [409.0, 382.0], [34.0, 380.0]], ('每瓶22元,1000瓶起订)', 0.999457061290741)]
+[[[34.0, 406.0], [435.0, 406.0], [435.0, 454.0], [34.0, 454.0]], ('【品牌】:代加工方式/OEMODM', 0.9994476437568665)]
+[[[32.0, 477.0], [341.0, 474.0], [341.0, 526.0], [32.0, 528.0]], ('【品名】:纯臻营养护发素', 0.9984829425811768)]
+[[[32.0, 549.0], [353.0, 549.0], [353.0, 600.0], [32.0, 600.0]], ('【产品编号】:YM-X-3011', 0.9997670650482178)]
+[[[30.0, 621.0], [263.0, 617.0], [264.0, 668.0], [31.0, 672.0]], ('【净含量】:220ml', 0.9565265774726868)]
+[[[33.0, 692.0], [365.0, 695.0], [364.0, 743.0], [33.0, 740.0]], ('【适用人群】:适合所有肤质', 0.9993946552276611)]
+[[[32.0, 763.0], [499.0, 766.0], [498.0, 816.0], [32.0, 813.0]], ('【主要成分】:鲸蜡硬脂醇、燕麦β-葡聚', 0.9533663392066956)]
+[[[38.0, 840.0], [407.0, 840.0], [407.0, 884.0], [38.0, 884.0]], ('糖、椰油酰胺丙基甜菜碱、泛醒', 0.9451590776443481)]
+[[[525.0, 842.0], [690.0, 842.0], [690.0, 898.0], [525.0, 898.0]], ('(成品包材)', 0.9980840682983398)]
+[[[34.0, 910.0], [522.0, 910.0], [522.0, 957.0], [34.0, 957.0]], ('【主要功能】:可紧致头发磷层,从而达到', 0.9985333681106567)]
+[[[39.0, 983.0], [536.0, 983.0], [536.0, 1027.0], [39.0, 1027.0]], ('即时持久改善头发光泽的效果,给干燥的头', 0.9993751645088196)]
+[[[32.0, 1051.0], [201.0, 1048.0], [202.0, 1104.0], [33.0, 1107.0]], ('发足够的滋养', 0.9753393530845642)]
+```
+
+
+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### 模型运行结果
+
+
+
+```bash
+$ python3 inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+[[[36.0, 409.0], [486.0, 386.0], [489.0, 434.0], [38.0, 457.0]], ('上海斯格威铂尔大酒店', 0.9929969906806946)]
+[[[141.0, 456.0], [403.0, 444.0], [404.0, 483.0], [143.0, 495.0]], ('←打浦路15号', 0.862202525138855)]
+[[[17.0, 505.0], [519.0, 484.0], [521.0, 535.0], [19.0, 555.0]], ('绿洲仕格维花园公寓', 0.9960622787475586)]
+[[[67.0, 550.0], [418.0, 539.0], [420.0, 578.0], [68.0, 590.0]], ('打浦路25 29 35号', 0.9729113578796387)]
+[[[34.0, 78.0], [442.0, 80.0], [441.0, 174.0], [33.0, 171.0]], ('纯臻营养护发素', 0.9860424399375916)]
+[[[30.0, 181.0], [255.0, 181.0], [255.0, 244.0], [30.0, 244.0]], ('产品信息/参数', 0.949313759803772)]
+[[[39.0, 258.0], [478.0, 258.0], [478.0, 309.0], [39.0, 309.0]], ('(45元/每公斤、100公斤起订)', 0.9828777313232422)]
+[[[36.0, 321.0], [411.0, 325.0], [411.0, 384.0], [35.0, 380.0]], ('每瓶22元、1000瓶起订)', 0.9913207292556763)]
+[[[37.0, 406.0], [432.0, 406.0], [432.0, 450.0], [37.0, 450.0]], ('【品牌】:代加工方式/OEM ODM', 0.9849441051483154)]
+[[[31.0, 475.0], [342.0, 472.0], [342.0, 527.0], [31.0, 530.0]], ('【品名】:纯臻营养护发素', 0.9962107539176941)]
+[[[593.0, 539.0], [623.0, 539.0], [623.0, 700.0], [593.0, 700.0]], ('ODM OEM', 0.9357462525367737)]
+[[[31.0, 549.0], [353.0, 546.0], [353.0, 599.0], [31.0, 601.0]], ('【产品编号】:YM-X-3011', 0.9970366358757019)]
+[[[29.0, 620.0], [264.0, 617.0], [264.0, 668.0], [30.0, 671.0]], ('【净含量】:220ml', 0.9971547722816467)]
+[[[33.0, 691.0], [367.0, 694.0], [367.0, 742.0], [33.0, 739.0]], ('【适用人君】:适合所有肤质', 0.9611490964889526)]
+[[[33.0, 764.0], [497.0, 767.0], [497.0, 813.0], [33.0, 811.0]], ('【主要成分】:鲸蜡硬脂醇、燕麦-葡聚', 0.9434943795204163)]
+[[[37.0, 839.0], [409.0, 839.0], [409.0, 886.0], [37.0, 886.0]], ('糖、椰油酰胺丙基甜菜碱、泛醒', 0.9171066880226135)]
+[[[526.0, 843.0], [689.0, 843.0], [689.0, 896.0], [526.0, 896.0]], ('(成品色材)', 0.8261211514472961)]
+[[[33.0, 908.0], [522.0, 910.0], [522.0, 957.0], [33.0, 955.0]], ('【主要功能】:可紧致头发磷层,从而达到', 0.9950319528579712)]
+[[[39.0, 983.0], [536.0, 983.0], [536.0, 1027.0], [39.0, 1027.0]], ('即时持久改善头发光泽的效果,给干燥的头', 0.9946616291999817)]
+[[[34.0, 1051.0], [201.0, 1051.0], [201.0, 1103.0], [34.0, 1103.0]], ('发定够的滋养', 0.9353836178779602)]
+[[[292.0, 297.0], [335.0, 295.0], [350.0, 850.0], [307.0, 851.0]], ('土地整治与土壤修复研究中心', 0.976573646068573)]
+[[[344.0, 299.0], [381.0, 298.0], [387.0, 662.0], [351.0, 663.0]], ('华南农业大学-东图', 0.9912211298942566)]
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_qwen2-5-vl-3b.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_qwen2-5-vl-3b.mdx
new file mode 100644
index 000000000..c1e1dadaa
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_qwen2-5-vl-3b.mdx
@@ -0,0 +1,317 @@
+**Qwen2.5-VL** 是由阿里云通义千问团队研发的多模态视觉语言模型系列。
+该系列在继承前代优势的基础上,进一步强化了对动态视频的深度理解、超长文档的精确解析以及复杂场景下的逻辑推理能力,致力于提供更具通用性的视觉交互体验。
+
+- 核心特点:系列模型具备卓越的视觉感知与对齐能力,能够处理高分辨率图像及长达 1 小时以上的视频输入。
+ 其突出优势在于“视觉智能体”(Visual Agent)能力的增强,支持精准的坐标感知、UI 界面交互以及复杂的结构化数据提取,在自动化任务处理、多模态搜索及高精度视觉问答中展现出强大性能。
+- 版本说明:本模型 Qwen2.5-VL-3B-Instruct 是该系列的中量级实践版本,参数量约为 30 亿,并经过严格的指令微调。
+ 它在模型性能与计算开销之间取得了优异平衡,既保留了强大的多模态推理能力,又兼顾了部署的灵活性,广泛适用于端侧设备、实时交互应用及各类低资源开发环境。
+
+## 环境配置
+
+参考 [llama.cpp](../../../../orion/o6/app-development/artificial-intelligence/llama_cpp.md) 文档准备好 llama.cpp 工具。
+
+## 快速开始
+
+### 下载模型
+
+
+
+```bash
+pip3 install modelscope
+cd llama.cpp
+modelscope download --model radxa/Qwen2.5-VL-3B-Instruct-NOE mmproj-Qwen2.5-VL-3b-Instruct-F16.gguf --local_dir ./
+modelscope download --model radxa/Qwen2.5-VL-3B-Instruct-NOE Qwen2.5-VL-3B-Instruct-Q5_K_M.gguf --local_dir ./
+modelscope download --model radxa/Qwen2.5-VL-3B-Instruct-NOE test.png --local_dir ./
+```
+
+
+
+### 运行模型
+
+
+
+```bash
+./build/bin/llama-mtmd-cli -m ./Qwen2.5-VL-3B-Instruct-Q5_K_M.gguf --mmproj ./mmproj-Qwen2.5-VL-3b-Instruct-F16.gguf -p "Describe this image." --image ./test.png
+```
+
+
+
+## 完整转换流程
+
+### 克隆模型仓库
+
+
+
+```bash
+cd llama.cpp
+git clone https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct
+```
+
+
+
+### 创建虚拟环境
+
+
+
+```bash
+python3 -m venv .venv
+source .venv/bin/activate
+pip3 install -r requirements.txt
+```
+
+
+
+### 模型转换
+
+#### 转换文本模块
+
+
+
+```bash
+python3 ./convert_hf_to_gguf.py ./Qwen2.5-VL-3B-Instruct
+```
+
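+convert_hf_to_gguf.py 默认通常会生成 F16 精度的 gguf;如需显式指定输出精度,可以追加 --outtype 参数,例如:
+
+```bash
+python3 ./convert_hf_to_gguf.py ./Qwen2.5-VL-3B-Instruct --outtype f16
+```
+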
+
+
+#### 转换视觉模块
+
+
+
+```bash
+python3 ./convert_hf_to_gguf.py --mmproj ./Qwen2.5-VL-3B-Instruct
+```
+
+
+
+### 模型量化
+
+这里采用 Q5_K_M 量化。
+
+
+
+```bash
+./build/bin/llama-quantize ./Qwen2.5-VL-3B-Instruct/Qwen2.5-VL-3B-Instruct-F16.gguf ./Qwen2.5-VL-3B-Instruct/Qwen2.5-VL-3B-Instruct-Q5_K_M.gguf Q5_K_M
+```
+
+
+
+### 模型测试
+
+
+

+
+ 模型测试输入
+
+
+
+
+
+```bash
+./build/bin/llama-mtmd-cli -m ./Qwen2.5-VL-3B-Instruct/Qwen2.5-VL-3B-Instruct-Q5_K_M.gguf --mmproj ./Qwen2.5-VL-3B-Instruct/mmproj-Qwen2.5-VL-3b-Instruct-F16.gguf -p "Describe this image." --image ./Qwen2.5-VL-3B-Instruct/test.png
+```
+
+
+
+模型输出:
+
+```bash
+$ ./build/bin/llama-mtmd-cli -m ./Qwen2.5-VL-3B-Instruct/Qwen2.5-VL-3B-Instruct-Q5_K_M.gguf --mmproj ./Qwen2.5-VL-3B-Instruct/mmproj-Qwen2.5-VL-3b-Instruct-F16.gguf -p "Describe this image." --image ./Qwen2.5-VL-3B-Instruct/test.png
+build: 7110 (3ae282a06) with cc (Debian 12.2.0-14+deb12u1) 12.2.0 for aarch64-linux-gnu
+llama_model_loader: loaded meta data with 27 key-value pairs and 434 tensors from ./Qwen2.5-VL-3B-Instruct/Qwen2.5-VL-3B-Instruct-Q5_K_M.gguf (version GGUF V3 (latest))
+llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
+llama_model_loader: - kv 0: general.architecture str = qwen2vl
+llama_model_loader: - kv 1: general.type str = model
+llama_model_loader: - kv 2: general.name str = Qwen2.5 VL 3B Instruct
+llama_model_loader: - kv 3: general.finetune str = Instruct
+llama_model_loader: - kv 4: general.basename str = Qwen2.5-VL
+llama_model_loader: - kv 5: general.size_label str = 3B
+llama_model_loader: - kv 6: qwen2vl.block_count u32 = 36
+llama_model_loader: - kv 7: qwen2vl.context_length u32 = 128000
+llama_model_loader: - kv 8: qwen2vl.embedding_length u32 = 2048
+llama_model_loader: - kv 9: qwen2vl.feed_forward_length u32 = 11008
+llama_model_loader: - kv 10: qwen2vl.attention.head_count u32 = 16
+llama_model_loader: - kv 11: qwen2vl.attention.head_count_kv u32 = 2
+llama_model_loader: - kv 12: qwen2vl.rope.freq_base f32 = 1000000.000000
+llama_model_loader: - kv 13: qwen2vl.attention.layer_norm_rms_epsilon f32 = 0.000001
+llama_model_loader: - kv 14: qwen2vl.rope.dimension_sections arr[i32,4] = [16, 24, 24, 0]
+llama_model_loader: - kv 15: tokenizer.ggml.model str = gpt2
+llama_model_loader: - kv 16: tokenizer.ggml.pre str = qwen2
+llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
+llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
+llama_model_loader: - kv 19: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
+llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 151645
+llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 151643
+llama_model_loader: - kv 22: tokenizer.ggml.bos_token_id u32 = 151643
+llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = false
+llama_model_loader: - kv 24: tokenizer.chat_template str = {% set image_count = namespace(value=...
+llama_model_loader: - kv 25: general.quantization_version u32 = 2
+llama_model_loader: - kv 26: general.file_type u32 = 17
+llama_model_loader: - type f32: 181 tensors
+llama_model_loader: - type q5_K: 216 tensors
+llama_model_loader: - type q6_K: 37 tensors
+print_info: file format = GGUF V3 (latest)
+print_info: file type = Q5_K - Medium
+print_info: file size = 2.07 GiB (5.75 BPW)
+load: printing all EOG tokens:
+load: - 151643 ('<|endoftext|>')
+load: - 151645 ('<|im_end|>')
+load: - 151662 ('<|fim_pad|>')
+load: - 151663 ('<|repo_name|>')
+load: - 151664 ('<|file_sep|>')
+load: special tokens cache size = 22
+load: token to piece cache size = 0.9310 MB
+print_info: arch = qwen2vl
+print_info: vocab_only = 0
+print_info: n_ctx_train = 128000
+print_info: n_embd = 2048
+print_info: n_embd_inp = 2048
+print_info: n_layer = 36
+print_info: n_head = 16
+print_info: n_head_kv = 2
+print_info: n_rot = 128
+print_info: n_swa = 0
+print_info: is_swa_any = 0
+print_info: n_embd_head_k = 128
+print_info: n_embd_head_v = 128
+print_info: n_gqa = 8
+print_info: n_embd_k_gqa = 256
+print_info: n_embd_v_gqa = 256
+print_info: f_norm_eps = 0.0e+00
+print_info: f_norm_rms_eps = 1.0e-06
+print_info: f_clamp_kqv = 0.0e+00
+print_info: f_max_alibi_bias = 0.0e+00
+print_info: f_logit_scale = 0.0e+00
+print_info: f_attn_scale = 0.0e+00
+print_info: n_ff = 11008
+print_info: n_expert = 0
+print_info: n_expert_used = 0
+print_info: n_expert_groups = 0
+print_info: n_group_used = 0
+print_info: causal attn = 1
+print_info: pooling type = -1
+print_info: rope type = 8
+print_info: rope scaling = linear
+print_info: freq_base_train = 1000000.0
+print_info: freq_scale_train = 1
+print_info: n_ctx_orig_yarn = 128000
+print_info: rope_finetuned = unknown
+print_info: mrope sections = [16, 24, 24, 0]
+print_info: model type = 3B
+print_info: model params = 3.09 B
+print_info: general.name = Qwen2.5 VL 3B Instruct
+print_info: vocab type = BPE
+print_info: n_vocab = 151936
+print_info: n_merges = 151387
+print_info: BOS token = 151643 '<|endoftext|>'
+print_info: EOS token = 151645 '<|im_end|>'
+print_info: EOT token = 151645 '<|im_end|>'
+print_info: PAD token = 151643 '<|endoftext|>'
+print_info: LF token = 198 'Ċ'
+print_info: FIM PRE token = 151659 '<|fim_prefix|>'
+print_info: FIM SUF token = 151661 '<|fim_suffix|>'
+print_info: FIM MID token = 151660 '<|fim_middle|>'
+print_info: FIM PAD token = 151662 '<|fim_pad|>'
+print_info: FIM REP token = 151663 '<|repo_name|>'
+print_info: FIM SEP token = 151664 '<|file_sep|>'
+print_info: EOG token = 151643 '<|endoftext|>'
+print_info: EOG token = 151645 '<|im_end|>'
+print_info: EOG token = 151662 '<|fim_pad|>'
+print_info: EOG token = 151663 '<|repo_name|>'
+print_info: EOG token = 151664 '<|file_sep|>'
+print_info: max token length = 256
+load_tensors: loading model tensors, this can take a while... (mmap = true)
+load_tensors: CPU_Mapped model buffer size = 2116.07 MiB
+..........................................................................................
+llama_context: constructing llama_context
+llama_context: n_seq_max = 1
+llama_context: n_ctx = 4096
+llama_context: n_ctx_seq = 4096
+llama_context: n_batch = 2048
+llama_context: n_ubatch = 512
+llama_context: causal_attn = 1
+llama_context: flash_attn = auto
+llama_context: kv_unified = false
+llama_context: freq_base = 1000000.0
+llama_context: freq_scale = 1
+llama_context: n_ctx_seq (4096) < n_ctx_train (128000) -- the full capacity of the model will not be utilized
+llama_context: CPU output buffer size = 0.58 MiB
+llama_kv_cache: CPU KV buffer size = 144.00 MiB
+llama_kv_cache: size = 144.00 MiB ( 4096 cells, 36 layers, 1/1 seqs), K (f16): 72.00 MiB, V (f16): 72.00 MiB
+llama_context: Flash Attention was auto, set to enabled
+llama_context: CPU compute buffer size = 304.75 MiB
+llama_context: graph nodes = 1231
+llama_context: graph splits = 1
+common_init_from_params: added <|endoftext|> logit bias = -inf
+common_init_from_params: added <|im_end|> logit bias = -inf
+common_init_from_params: added <|fim_pad|> logit bias = -inf
+common_init_from_params: added <|repo_name|> logit bias = -inf
+common_init_from_params: added <|file_sep|> logit bias = -inf
+common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
+common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
+mtmd_cli_context: chat template example:
+<|im_start|>system
+You are a helpful assistant<|im_end|>
+<|im_start|>user
+Hello<|im_end|>
+<|im_start|>assistant
+Hi there<|im_end|>
+<|im_start|>user
+How are you?<|im_end|>
+<|im_start|>assistant
+
+clip_model_loader: model name: Qwen2.5 VL 3B Instruct
+clip_model_loader: description:
+clip_model_loader: GGUF version: 3
+clip_model_loader: alignment: 32
+clip_model_loader: n_tensors: 519
+clip_model_loader: n_kv: 22
+
+clip_model_loader: has vision encoder
+clip_ctx: CLIP using CPU backend
+load_hparams: Qwen-VL models require at minimum 1024 image tokens to function correctly on grounding tasks
+load_hparams: if you encounter problems with accuracy, try adding --image-min-tokens 1024
+load_hparams: more info: https://github.com/ggml-org/llama.cpp/issues/16842
+
+load_hparams: projector: qwen2.5vl_merger
+load_hparams: n_embd: 1280
+load_hparams: n_head: 16
+load_hparams: n_ff: 3420
+load_hparams: n_layer: 32
+load_hparams: ffn_op: silu
+load_hparams: projection_dim: 2048
+
+--- vision hparams ---
+load_hparams: image_size: 560
+load_hparams: patch_size: 14
+load_hparams: has_llava_proj: 0
+load_hparams: minicpmv_version: 0
+load_hparams: n_merge: 2
+load_hparams: n_wa_pattern: 8
+load_hparams: image_min_pixels: 6272
+load_hparams: image_max_pixels: 3211264
+
+load_hparams: model size: 1276.39 MiB
+load_hparams: metadata size: 0.18 MiB
+alloc_compute_meta: warmup with image size = 1288 x 1288
+alloc_compute_meta: CPU compute buffer size = 732.56 MiB
+alloc_compute_meta: graph splits = 1, nodes = 1092
+warmup: flash attention is enabled
+main: loading model: ./Qwen2.5-VL-3B-Instruct/Qwen2.5-VL-3B-Instruct-Q5_K_M.gguf
+encoding image slice...
+image slice encoded in 8425 ms
+decoding image batch 1/1, n_tokens_batch = 361
+image decoded (batch 1/1) in 13109 ms
+
+The image depicts a single, delicate rose with a soft pink hue, resting on a dark, possibly marble, surface. The rose is positioned near a window, which has a dark frame. The window appears to be letting in some light, creating a contrast between the illuminated rose and the darker surroundings. The overall scene has a serene and somewhat melancholic atmosphere, with the rose being the central focus.
+
+
+llama_perf_context_print: load time = 497.68 ms
+llama_perf_context_print: prompt eval time = 22189.23 ms / 375 tokens ( 59.17 ms per token, 16.90 tokens per second)
+llama_perf_context_print: eval time = 9434.97 ms / 80 runs ( 117.94 ms per token, 8.48 tokens per second)
+llama_perf_context_print: total time = 31913.30 ms / 455 tokens
+llama_perf_context_print: graphs reused = 0
+```
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_qwen2vl-2b.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_qwen2vl-2b.mdx
new file mode 100644
index 000000000..a8969eaa2
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_qwen2vl-2b.mdx
@@ -0,0 +1,308 @@
+**Qwen2-VL** 是由阿里云通义千问团队研发的开源多模态视觉语言模型系列。
+该系列模型通过统一的视觉编码器与大语言模型基座深度融合,旨在实现强大的图像理解、细粒度推理与开放世界对话能力。
+
+- 核心特点:系列模型普遍具备高效的视觉语义对齐能力,支持对图像内容的精准描述、复杂问答、逻辑推理以及多轮交互。
+ 其架构设计兼顾了性能与效率,在文档分析、智能助理、多模态搜索等场景中有广泛的应用潜力。
+- 版本说明:本模型 Qwen2-VL-2B-Instruct 是该系列的轻量化实践版本,参数量约为 20 亿,并经过指令微调优化,适合在端侧与低资源环境中部署,实现实时多模态交互。
+
+## 环境配置
+
+参考 [llama.cpp](../../../../orion/o6/app-development/artificial-intelligence/llama_cpp.md) 文档准备好 llama.cpp 工具。
+
+## 快速开始
+
+### 下载模型
+
+
+
+```bash
+pip3 install modelscope
+cd llama.cpp
+modelscope download --model radxa/Qwen2-VL-2B-Instruct-NOE mmproj-Qwen2-VL-2b-Instruct-F16.gguf --local_dir ./
+modelscope download --model radxa/Qwen2-VL-2B-Instruct-NOE Qwen2-VL-2B-Instruct-Q5_K_M.gguf --local_dir ./
+modelscope download --model radxa/Qwen2-VL-2B-Instruct-NOE test.png --local_dir ./
+```
+
+
+
+### 运行模型
+
+
+
+```bash
+./build/bin/llama-mtmd-cli -m ./Qwen2-VL-2B-Instruct-Q5_K_M.gguf --mmproj ./mmproj-Qwen2-VL-2b-Instruct-F16.gguf -p "Describe this image." --image ./test.png
+```
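+
+若想粗略评估文本部分的推理速度,可以用 llama-bench 单独测试语言模型(llama-bench 不加载 mmproj 视觉投影,线程数为示例值):
+
+```bash
+./build/bin/llama-bench -m ./Qwen2-VL-2B-Instruct-Q5_K_M.gguf -t 8
+```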
+
+
+
+## 完整转换流程
+
+### 克隆模型仓库
+
+
+
+```bash
+cd llama.cpp
+git clone https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct
+```
+
+
+
+### 创建虚拟环境
+
+
+
+```bash
+python3 -m venv .venv
+source .venv/bin/activate
+pip3 install -r requirements.txt
+```
+
+
+
+### 模型转换
+
+#### 转换文本模块
+
+
+
+```bash
+python3 ./convert_hf_to_gguf.py ./Qwen2-VL-2B-Instruct
+```
+
+
+
+#### 转换视觉模块
+
+
+
+```bash
+python3 ./convert_hf_to_gguf.py --mmproj ./Qwen2-VL-2B-Instruct
+```
+
+
+
+### 模型量化
+
+这里采用 Q5_K_M 量化。
+
+
+
+```bash
+./build/bin/llama-quantize ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-F16.gguf ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf Q5_K_M
+```
+
+
+
+### 模型测试
+
+
+

+
+ 模型测试输入
+
+
+
+
+
+```bash
+./build/bin/llama-mtmd-cli -m ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf --mmproj ./Qwen2-VL-2B-Instruct/mmproj-Qwen2-VL-2b-Instruct-F16.gguf -p "Describe this image." --image ./Qwen2-VL-2B-Instruct/test.png
+```
+
+
+
+模型输出:
+
+```bash
+$ ./build/bin/llama-mtmd-cli -m ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf --mmproj ./Qwen2-VL-2B-Instruct/mmproj-Qwen2-VL-2b-Instruct-F16.gguf -p "Describe this image." --image ./Qwen2-VL-2B-Instruct/test.png
+build: 7110 (3ae282a06) with cc (Debian 12.2.0-14+deb12u1) 12.2.0 for aarch64-linux-gnu
+llama_model_loader: loaded meta data with 33 key-value pairs and 338 tensors from ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf (version GGUF V3 (latest))
+llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
+llama_model_loader: - kv 0: general.architecture str = qwen2vl
+llama_model_loader: - kv 1: general.type str = model
+llama_model_loader: - kv 2: general.name str = Qwen2 VL 2B Instruct
+llama_model_loader: - kv 3: general.finetune str = Instruct
+llama_model_loader: - kv 4: general.basename str = Qwen2-VL
+llama_model_loader: - kv 5: general.size_label str = 2B
+llama_model_loader: - kv 6: general.license str = apache-2.0
+llama_model_loader: - kv 7: general.base_model.count u32 = 1
+llama_model_loader: - kv 8: general.base_model.0.name str = Qwen2 VL 2B
+llama_model_loader: - kv 9: general.base_model.0.organization str = Qwen
+llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2-VL-2B
+llama_model_loader: - kv 11: general.tags arr[str,2] = ["multimodal", "image-text-to-text"]
+llama_model_loader: - kv 12: general.languages arr[str,1] = ["en"]
+llama_model_loader: - kv 13: qwen2vl.block_count u32 = 28
+llama_model_loader: - kv 14: qwen2vl.context_length u32 = 32768
+llama_model_loader: - kv 15: qwen2vl.embedding_length u32 = 1536
+llama_model_loader: - kv 16: qwen2vl.feed_forward_length u32 = 8960
+llama_model_loader: - kv 17: qwen2vl.attention.head_count u32 = 12
+llama_model_loader: - kv 18: qwen2vl.attention.head_count_kv u32 = 2
+llama_model_loader: - kv 19: qwen2vl.rope.freq_base f32 = 1000000.000000
+llama_model_loader: - kv 20: qwen2vl.attention.layer_norm_rms_epsilon f32 = 0.000001
+llama_model_loader: - kv 21: qwen2vl.rope.dimension_sections arr[i32,4] = [16, 24, 24, 0]
+llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
+llama_model_loader: - kv 23: tokenizer.ggml.pre str = qwen2
+llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
+llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
+llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
+llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 151645
+llama_model_loader: - kv 28: tokenizer.ggml.padding_token_id u32 = 151643
+llama_model_loader: - kv 29: tokenizer.ggml.bos_token_id u32 = 151643
+llama_model_loader: - kv 30: tokenizer.chat_template str = {% set image_count = namespace(value=...
+llama_model_loader: - kv 31: general.quantization_version u32 = 2
+llama_model_loader: - kv 32: general.file_type u32 = 17
+llama_model_loader: - type f32: 141 tensors
+llama_model_loader: - type q5_K: 168 tensors
+llama_model_loader: - type q6_K: 29 tensors
+print_info: file format = GGUF V3 (latest)
+print_info: file type = Q5_K - Medium
+print_info: file size = 1.04 GiB (5.80 BPW)
+load: printing all EOG tokens:
+load: - 151643 ('<|endoftext|>')
+load: - 151645 ('<|im_end|>')
+load: special tokens cache size = 14
+load: token to piece cache size = 0.9309 MB
+print_info: arch = qwen2vl
+print_info: vocab_only = 0
+print_info: n_ctx_train = 32768
+print_info: n_embd = 1536
+print_info: n_embd_inp = 1536
+print_info: n_layer = 28
+print_info: n_head = 12
+print_info: n_head_kv = 2
+print_info: n_rot = 128
+print_info: n_swa = 0
+print_info: is_swa_any = 0
+print_info: n_embd_head_k = 128
+print_info: n_embd_head_v = 128
+print_info: n_gqa = 6
+print_info: n_embd_k_gqa = 256
+print_info: n_embd_v_gqa = 256
+print_info: f_norm_eps = 0.0e+00
+print_info: f_norm_rms_eps = 1.0e-06
+print_info: f_clamp_kqv = 0.0e+00
+print_info: f_max_alibi_bias = 0.0e+00
+print_info: f_logit_scale = 0.0e+00
+print_info: f_attn_scale = 0.0e+00
+print_info: n_ff = 8960
+print_info: n_expert = 0
+print_info: n_expert_used = 0
+print_info: n_expert_groups = 0
+print_info: n_group_used = 0
+print_info: causal attn = 1
+print_info: pooling type = -1
+print_info: rope type = 8
+print_info: rope scaling = linear
+print_info: freq_base_train = 1000000.0
+print_info: freq_scale_train = 1
+print_info: n_ctx_orig_yarn = 32768
+print_info: rope_finetuned = unknown
+print_info: mrope sections = [16, 24, 24, 0]
+print_info: model type = 1.5B
+print_info: model params = 1.54 B
+print_info: general.name = Qwen2 VL 2B Instruct
+print_info: vocab type = BPE
+print_info: n_vocab = 151936
+print_info: n_merges = 151387
+print_info: BOS token = 151643 '<|endoftext|>'
+print_info: EOS token = 151645 '<|im_end|>'
+print_info: EOT token = 151645 '<|im_end|>'
+print_info: PAD token = 151643 '<|endoftext|>'
+print_info: LF token = 198 'Ċ'
+print_info: EOG token = 151643 '<|endoftext|>'
+print_info: EOG token = 151645 '<|im_end|>'
+print_info: max token length = 256
+load_tensors: loading model tensors, this can take a while... (mmap = true)
+load_tensors: CPU_Mapped model buffer size = 1067.26 MiB
+....................................................................................
+llama_context: constructing llama_context
+llama_context: n_seq_max = 1
+llama_context: n_ctx = 4096
+llama_context: n_ctx_seq = 4096
+llama_context: n_batch = 2048
+llama_context: n_ubatch = 512
+llama_context: causal_attn = 1
+llama_context: flash_attn = auto
+llama_context: kv_unified = false
+llama_context: freq_base = 1000000.0
+llama_context: freq_scale = 1
+llama_context: n_ctx_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
+llama_context: CPU output buffer size = 0.58 MiB
+llama_kv_cache: CPU KV buffer size = 112.00 MiB
+llama_kv_cache: size = 112.00 MiB ( 4096 cells, 28 layers, 1/1 seqs), K (f16): 56.00 MiB, V (f16): 56.00 MiB
+llama_context: Flash Attention was auto, set to enabled
+llama_context: CPU compute buffer size = 302.75 MiB
+llama_context: graph nodes = 959
+llama_context: graph splits = 1
+common_init_from_params: added <|endoftext|> logit bias = -inf
+common_init_from_params: added <|im_end|> logit bias = -inf
+common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
+common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
+mtmd_cli_context: chat template example:
+<|im_start|>system
+You are a helpful assistant<|im_end|>
+<|im_start|>user
+Hello<|im_end|>
+<|im_start|>assistant
+Hi there<|im_end|>
+<|im_start|>user
+How are you?<|im_end|>
+<|im_start|>assistant
+
+clip_model_loader: model name: Qwen2 VL 2B Instruct
+clip_model_loader: description:
+clip_model_loader: GGUF version: 3
+clip_model_loader: alignment: 32
+clip_model_loader: n_tensors: 520
+clip_model_loader: n_kv: 27
+
+clip_model_loader: has vision encoder
+clip_ctx: CLIP using CPU backend
+load_hparams: Qwen-VL models require at minimum 1024 image tokens to function correctly on grounding tasks
+load_hparams: if you encounter problems with accuracy, try adding --image-min-tokens 1024
+load_hparams: more info: https://github.com/ggml-org/llama.cpp/issues/16842
+
+load_hparams: projector: qwen2vl_merger
+load_hparams: n_embd: 1280
+load_hparams: n_head: 16
+load_hparams: n_ff: 1536
+load_hparams: n_layer: 32
+load_hparams: ffn_op: gelu_quick
+load_hparams: projection_dim: 1536
+
+--- vision hparams ---
+load_hparams: image_size: 560
+load_hparams: patch_size: 14
+load_hparams: has_llava_proj: 0
+load_hparams: minicpmv_version: 0
+load_hparams: n_merge: 2
+load_hparams: n_wa_pattern: 0
+load_hparams: image_min_pixels: 6272
+load_hparams: image_max_pixels: 3211264
+
+load_hparams: model size: 1269.94 MiB
+load_hparams: metadata size: 0.18 MiB
+alloc_compute_meta: warmup with image size = 1288 x 1288
+alloc_compute_meta: CPU compute buffer size = 267.08 MiB
+alloc_compute_meta: graph splits = 1, nodes = 1085
+warmup: flash attention is enabled
+main: loading model: ./Qwen2-VL-2B-Instruct/Qwen2-VL-2B-Instruct-Q5_K_M.gguf
+encoding image slice...
+image slice encoded in 11683 ms
+decoding image batch 1/1, n_tokens_batch = 361
+image decoded (batch 1/1) in 6250 ms
+
+The image depicts a single rose placed on a marble surface, likely a table or a shelf. The rose is positioned in such a way that it is slightly tilted, with its petals facing upwards. The background features a dark, possibly stone or marble, wall with a textured surface, and a window or mirror reflecting the surroundings. The overall composition of the image creates a serene and elegant atmosphere.
+
+
+llama_perf_context_print: load time = 416.66 ms
+llama_perf_context_print: prompt eval time = 18253.30 ms / 375 tokens ( 48.68 ms per token, 20.54 tokens per second)
+llama_perf_context_print: eval time = 5283.83 ms / 78 runs ( 67.74 ms per token, 14.76 tokens per second)
+llama_perf_context_print: total time = 23892.18 ms / 453 tokens
+llama_perf_context_print: graphs reused = 0
+```
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_real-esrgan.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_real-esrgan.mdx
new file mode 100644
index 000000000..5dbf5b9b6
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_real-esrgan.mdx
@@ -0,0 +1,141 @@
+**Real-ESRGAN** 是由腾讯 ARC 实验室提出的一种旨在修复现实世界复杂退化图像的超分辨率算法。它通过改进传统生成对抗网络(GAN)的训练方式,利用二阶退化模型模拟真实图像中的模糊、噪声及压缩伪影,从而实现对低质量图像的自然重构。
+
+- 核心特点:支持极高质量的图像细节增强与伪影消除,能够显著提升低分辨率图片的清晰度并还原纹理质感,广泛应用于老照片修复、视频增强、动漫高清化以及安防影像分析等领域。
+- 版本说明:本案例采用 Real-ESRGAN_x4plus 模型。作为该家族中泛化能力最强的版本,它专门针对现实世界中的各种未知退化进行优化。在保持经典 RRDB 基础架构的同时,它通过更深层的特征提取能力,在图像清晰度与视觉真实感之间取得了卓越的平衡,是目前处理通用图像放大与修复任务的首选方案。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Super_Resolution/onnx_real_esrgan
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Super_Resolution/onnx_real_esrgan/real_esrgan.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Super_Resolution/onnx_real_esrgan/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Super_Resolution/onnx_real_esrgan/model/realesrgan-x4.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Super_Resolution/onnx_real_esrgan/model/RealESRGAN_x4plus.pth
+```
+
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── pytorch2onnx_x4.py
+├── README.md
+├── real_esrgan.cix
+└── test_data
+```
+
+### 进行模型量化和转换
+
+
+
+```bash
+cd ..
+cixbuild cfg/onnx_realesrganbuild.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
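+
+推送方式不限,下面以 scp 为例(板端 IP 与目标路径均为示例,请按实际环境替换):
+
+```bash
+scp real_esrgan.cix radxa@192.168.1.100:~/ai_model_hub_25_Q3/models/ComputeVision/Super_Resolution/onnx_real_esrgan/
+```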
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+{" "}
+
+

+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### 模型运行结果
+
+
+
+```bash
+$ python3 inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+
+
+{" "}
+
+

+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_resnet50.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_resnet50.mdx
index 2102195fa..02159a537 100644
--- a/docs/common/orion-common/app-dev/artificial-intelligence/_resnet50.mdx
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_resnet50.mdx
@@ -1,231 +1,157 @@
-此文档介绍如何使用 CIX P1 NPU SDK 将 [ResNet50](https://github.com/onnx/models/blob/main/validated/vision/classification/resnet/model/resnet50-v1-12.onnx) 转换为 CIX SOC NPU 上可以运行的模型。
+**ResNet** 是由微软研究院提出的里程碑式深度卷积神经网络架构。它通过首创的残差学习(Residual Learning)机制,利用“跳跃连接”有效缓解了深层网络难以训练的退化与梯度消失问题,大幅放宽了深度学习在模型层数上的限制。
-整体来讲有四个步骤:
-:::tip
-步骤1~3 在 x86 主机 Linux 环境下执行
-:::
+- 核心特点:专注于高精度的图像特征提取与分类任务,其强大的通用特征表达能力使其成为目标检测、语义分割等复杂视觉任务最常用的基石架构(Backbone)。
+- 版本说明:本案例采用 ResNet-50 (V1)。作为 ResNet 家族中最具代表性的中坚力量,它由 50 层深度网络组成,采用了高效的瓶颈(Bottleneck)结构。它在计算复杂度和识别准确率之间取得了出色的平衡,是目前工业界兼具高性能与高稳定性、落地应用最广泛的经典视觉模型。
-1. 下载 NPU SDK 并安装 NOE Compiler
-2. 下载模型文件 (代码和脚本)
-3. 编译模型
-4. 部署模型到 Orion O6 / O6N
+:::info[环境配置]
+需要提前配置好相关环境。
-## 下载 NPU SDK 并安装 NOE Compiler
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
-请参考 [安装 NPU SDK](./npu-introduction#npu-sdk-安装) 进行 NPU SDK 和 NOE Compiler 的安装.
+## 快速开始
-## 下载模型文件
+### 下载模型文件
-在 CIX AI Model Hub 中包含了 ResNet50 的所需文件, 请用户按照 [下载 CIX AI Model Hub](./ai-hub#下载-cix-ai-model-hub) 下载,然后到对应的目录下查看
+
```bash
-cd ai_model_hub/models/ComputeVision/Image_Classification/onnx_resnet_v1_50
+cd ai_model_hub_25_Q3/models/ComputeVision/Image_Classification/onnx_resnet_v1_50
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Image_Classification/onnx_resnet_v1_50/resnet_v1_50.cix
```
-请确认目录结构是否同下图所示。
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
```bash
-.
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Image_Classification/onnx_resnet_v1_50/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Image_Classification/onnx_resnet_v1_50/model/resnet50-v1-12.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Image_Classification/onnx_resnet_v1_50/model/resnet50-v1-12-sim.onnx
+```
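+
+其中带 -sim 后缀的模型是固定输入尺寸并经 onnxsim 简化后的版本;若需从原始模型自行生成,可参考:
+
+```bash
+pip3 install onnxsim onnxruntime
+onnxsim resnet50-v1-12.onnx resnet50-v1-12-sim.onnx --overwrite-input-shape 1,3,224,224
+```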
+
+
+
+### 项目结构
+
+```txt
├── cfg
-│ └── onnx_resnet_v1_50build.cfg
├── datasets
-│ └── calib_data.npy
-├── graph.json
├── inference_npu.py
├── inference_onnx.py
+├── model
├── ReadMe.md
+├── resnet_v1_50.cix
├── test_data
-│ ├── ILSVRC2012_val_00002899.JPEG
-│ ├── ILSVRC2012_val_00004704.JPEG
-│ ├── ILSVRC2012_val_00021564.JPEG
-│ ├── ILSVRC2012_val_00024154.JPEG
-│ ├── ILSVRC2012_val_00037133.JPEG
-│ └── ILSVRC2012_val_00045790.JPEG
+├── label.txt
+├── main.cpp
+├── makefile
+├── noe_utils
└── Tutorials.ipynb
```
-## 编译模型
+### 进行模型量化和转换
-:::tip
-用户可无需从头编译模型,radxa 提供预编译好的 resnet_v1_50.cix 模型(可用下面步骤下载),如果使用预编译好的模型,可以跳过“编译模型” 这一步
+
```bash
-wget https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Image_Classification/onnx_resnet_v1_50/resnet_v1_50.cix
+cd ..
+cixbuild cfg/onnx_resnet_v1_50build.cfg
```
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
:::
-### 准备 onnx 模型
-
-- 下载 onnx 模型
-
- [resnet50-v1-12.onnx](https://github.com/onnx/models/blob/main/validated/vision/classification/resnet/model/resnet50-v1-12.onnx)
-
-- 简化模型
-
- 这里使用 onnxsim 进行模型输入固化和模型简化
-
- ```bash
- pip3 install onnxsim onnxruntime
- onnxsim resnet50-v1-12.onnx resnet50-v1-12-sim.onnx --overwrite-input-shape 1,3,224,224
- ```
-
-### 编译模型
-
-CIX SOC NPU 支持 INT8 计算,在编译模型前,我们需要使用 NOE Compiler 对模型进行 INT8 量化
-
-- 准备校准集
-
- - 自行准备校准集
-
- 在 `test_data` 目录下已经包含多张校准集的图片文件
-
- ```bash
- ├── test_data
- │ ├── ILSVRC2012_val_00002899.JPEG
- │ ├── ILSVRC2012_val_00004704.JPEG
- │ ├── ILSVRC2012_val_00021564.JPEG
- │ ├── ILSVRC2012_val_00024154.JPEG
- │ ├── ILSVRC2012_val_00037133.JPEG
- │ └── ILSVRC2012_val_00045790.JPEG
- ```
-
- 参考以下脚本生成校准文件
-
- ```python
- import sys
- import os
- import numpy as np
- _abs_path = os.path.join(os.getcwd(), "../../../../")
- sys.path.append(_abs_path)
- from utils.image_process import imagenet_preprocess_method1
-
- from utils.tools import get_file_list
- # Get a list of images from the provided path
- images_path = "test_data"
- images_list = get_file_list(images_path)
- data = []
- for image_path in images_list:
- input = imagenet_preprocess_method1(image_path)
- data.append(input)
- # concat the data and save calib dataset
- data = np.concatenate(data, axis=0)
- print(data.shape)
- np.save("datasets/calib_data_tmp.npy", data)
- print("Generate calib dataset success.")
- ```
-
-- 使用 NOE Compiler 量化与编译模型
-
- - 制作量化与编译 cfg 配置文件, 请参考以下配置
-
- ```bash
- [Common]
- mode = build
-
- [Parser]
- model_type = onnx
- model_name = resnet_v1_50
- detection_postprocess =
- model_domain = image_classification
- input_model = ./resnet50-v1-12-sim.onnx
- output_dir = ./
- input_shape = [1, 3, 224, 224]
- input = data
-
- [Optimizer]
- output_dir = ./
- calibration_data = datasets/calib_data_tmp.npy
- calibration_batch_size = 16
- dataset = numpydataset
- save_statistic_info = True
- cast_dtypes_for_lib = True
- global_calibration = adaround[10, 10, 32, 0.01]
-
- [GBuilder]
- target = X2_1204MP3
- outputs = resnet_v1_50.cix
- tiling = fps
- profile = True
- ```
-
- - 编译模型
- :::tip
- 如果遇到 cixbuild 报错 `[E] Optimizing model failed! CUDA error: no kernel image is available for execution on the device ...`
- 这意味着当前版本的 torch 不支持此 GPU,请完全卸载当前版本的 torch, 然后在 torch 官网下载最新版本。
- :::
- ```bash
- cixbuild ./onnx_resnet_v1_50build.cfg
- ```
-
-## 模型部署
-
-### NPU 推理
-
-将使用 NOE Compiler 编译好的 .cix 格式的模型复制到 Orion O6 / O6N 上进行模型验证
+### 测试主机推理
+
+#### 运行推理脚本
+
+
```bash
-python3 inference_npu.py --images test_data --model_path ./resnet_v1_50.cix
+python3 inference_onnx.py --images test_data --onnx_path model/resnet50-v1-12-sim.onnx
```
+
+
+#### 模型推理结果
+
+
+
```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Image_Classification/onnx_resnet_v1_50$ time python3 inference_npu.py --images test_data --model_path ./resnet_v1_50.cix
-npu: noe_init_context success
-npu: noe_load_graph success
-Input tensor count is 1.
-Output tensor count is 1.
-npu: noe_create_job success
-image path : test_data/ILSVRC2012_val_00004704.JPEG
-plunger, plumber's helper
-image path : test_data/ILSVRC2012_val_00021564.JPEG
-coucal
+$ python3 inference_onnx.py --images test_data --onnx_path model/resnet50-v1-12-sim.onnx
image path : test_data/ILSVRC2012_val_00024154.JPEG
Ibizan hound, Ibizan Podenco
-image path : test_data/ILSVRC2012_val_00037133.JPEG
-ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
+image path : test_data/ILSVRC2012_val_00021564.JPEG
+coucal
image path : test_data/ILSVRC2012_val_00002899.JPEG
rock python, rock snake, Python sebae
image path : test_data/ILSVRC2012_val_00045790.JPEG
Yorkshire terrier
-npu: noe_clean_job success
-npu: noe_unload_graph success
-npu: noe_deinit_context success
-
-real 0m2.963s
-user 0m3.266s
-sys 0m0.414s
+image path : test_data/ILSVRC2012_val_00037133.JPEG
+ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
```
-### CPU 推理
+
+
+### 进行 NPU 部署
-使用 CPU 对 onnx 模型进行推理验证正确性,可在 X86 主机上或 Orion O6 / O6N 上运行
+#### 运行推理脚本
+
+
```bash
-python3 inference_onnx.py --images test_data --onnx_path ./resnet50-v1-12-sim.onnx
+python3 inference_npu.py --images test_data --model_path resnet_v1_50.cix
```
+
+
+#### 模型推理结果
+
+
+
```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Image_Classification/onnx_resnet_v1_50$ time python3 inference_onnx.py --images test_data --onnx_path ./resnet50-v1-12-sim.onnx
-image path : test_data/ILSVRC2012_val_00004704.JPEG
-plunger, plumber's helper
+$ python3 inference_npu.py --images test_data --model_path resnet_v1_50.cix
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+image path : test_data/ILSVRC2012_val_00037133.JPEG
+ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
image path : test_data/ILSVRC2012_val_00021564.JPEG
coucal
image path : test_data/ILSVRC2012_val_00024154.JPEG
Ibizan hound, Ibizan Podenco
-image path : test_data/ILSVRC2012_val_00037133.JPEG
-ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
image path : test_data/ILSVRC2012_val_00002899.JPEG
rock python, rock snake, Python sebae
image path : test_data/ILSVRC2012_val_00045790.JPEG
Yorkshire terrier
-
-real 0m3.757s
-user 0m11.789s
-sys 0m0.396s
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
```
-可以看到 NPU 和 CPU 上推理的结果一致,但运行速度缩短
-
-## 参考文档
-
-论文链接: [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_scrfd-arcface.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_scrfd-arcface.mdx
new file mode 100644
index 000000000..8c3ac72fa
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_scrfd-arcface.mdx
@@ -0,0 +1,179 @@
+**SCRFD-ArcFace** 是一套集成了高效面部检测与高精度特征提取的深度学习方案。它将具备卓越尺度建模能力的 SCRFD 检测器与基于加性角度间隔损失(Additive Angular Margin Loss)的 ArcFace 识别模型相结合,实现了从复杂场景抓拍到身份精准比对的全流程视觉感知。
+
+- 核心特点:支持极速的面部定位与关键点回归,具备极强的特征辨识度与抗干扰能力,广泛应用于金融级身份验证、智慧安防、无感考勤以及大规模人脸检索等场景。
+- 版本说明:本案例采用 SCRFD-ArcFace 集成架构。其中 SCRFD 凭借其对计算资源的优化分配,在不同算力平台上均能保持高效率的检测响应;ArcFace 则通过强化特征空间的类间差异,显著提升了识别的准确率。这套方案是目前计算机视觉领域在处理实时人脸识别任务时,兼顾算法鲁棒性与工业落地性能的标杆级平衡选择。
+
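+识别阶段的身份比对,本质上是计算两条特征向量的余弦相似度。下面给出一个极简示意(特征维度与判定阈值均为假设值,仅演示计算方式):
+
+```python
+import numpy as np
+
+def cosine_similarity(a, b):
+    # 余弦相似度:夹角越小,两张人脸越可能属于同一人
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+emb_a = np.random.randn(512)  # 假设识别模型输出 512 维特征
+emb_b = np.random.randn(512)
+is_same_person = cosine_similarity(emb_a, emb_b) > 0.4  # 阈值需按业务实测标定
+```
+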
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface/arcface.cix
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface/scrfd.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py --det_model_path ./scrfd.cix --rec_model_path ./arcface.cix --faces-dir ./datasets/faces --image_path test_data
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface/model/det_10g.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface/model/det_2_5g.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface/model/det_500m.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface/model/w600k_mbf.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Face_Recognition/onnx_scrfd_arcface/model/w600k_r50.onnx
+```
+
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── datasets
+├── model
+├── test_data
+├── arcface.cix
+├── arcface_npu.py
+├── arcface_onnx.py
+├── scrfd.cix
+├── scrfd_npu.py
+├── scrfd_onnx.py
+├── inference_npu.py
+├── inference_onnx.py
+├── helpers.py
+├── ReadMe.md
+└── requirements.txt
+```
+
+### 进行模型量化和转换
+
+#### 转换 SCRFD 模型
+
+
+
+```bash
+cd ..
+cixbuild cfg/onnx_scrfdbuild.cfg
+```
+
+
+
+#### 转换 ArcFace 模型
+
+
+
+```bash
+cixbuild cfg/onnx_arcfacebuild.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py --det_onnx_path ./model/det_10g.onnx --rec_onnx_path ./model/w600k_r50.onnx --faces-dir ./datasets/faces --image_path test_data
+```
+
+
+
+#### 模型推理结果
+
+
+
+{" "}
+
+

+

+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py --det_model_path ./scrfd.cix --rec_model_path ./arcface.cix --faces-dir ./datasets/faces --image_path test_data
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_npu.py --det_model_path ./scrfd.cix --rec_model_path ./arcface.cix --faces-dir ./datasets/faces --image_path test_data
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 9.
+npu: noe_create_job success
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+./datasets/faces/Monica.png
+./datasets/faces/Phoebe.png
+./datasets/faces/Rachel.png
+./datasets/faces/Chandler.png
+./datasets/faces/Joey.png
+./datasets/faces/Ross.png
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+
+
+{" "}
+
+

+

+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_sd-v1-4.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_sd-v1-4.mdx
new file mode 100644
index 000000000..e59b65d4c
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_sd-v1-4.mdx
@@ -0,0 +1,250 @@
+**Stable Diffusion** 是一款基于潜空间扩散机制的文本生成图像模型。它通过将图像压缩至低维的隐向量空间进行去噪训练,大幅降低了生成式 AI 对计算资源的依赖,使得在消费级显卡上生成高质量、高艺术性的图像成为可能。
+
+- 核心特点:支持强大的文本生成图像(Text-to-Image)、图像理解与重绘(Image-to-Image)以及局部绘制(Inpainting)功能,能够根据自然语言描述生成极具视觉冲击力的艺术作品。
+- 版本说明:本案例采用 Stable Diffusion v1.4 模型。作为该系列的首个工业级主流版本,它基于数亿组图文对进行了深度预训练,具备极强的审美表达能力与提示词遵循能力。该模型在生成效果与显存占用之间取得了极佳的平衡,是目前生成式 AI 领域生态最丰富、插件兼容性最广的经典模型之一。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/decoder.cix
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/default_seed.npy
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/encoder.cix
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/uncondition.npy
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/unet.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/model/decoder
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/model/decoder/decoder.onnx
+cd ../encoder
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/model/encoder/encoder.onnx
+cd ../unet
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/model/unet/unet.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Generative_AI/Text_to_Image/onnx_stable_diffusion_v1_4/model/unet/weights.pb
+```
+
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── datasets
+├── decoder.cix
+├── default_seed.npy
+├── encoder.cix
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── ReadMe.md
+├── tokenizer
+├── uncondition.npy
+└── unet.cix
+```
+
+### 进行模型量化和转换
+
+#### 转换文本编码器
+
+
+
+```bash
+cd ../..
+cixbuild cfg/encoder/encoderbuild.cfg
+```
+
+
+
+#### 转换 U-Net 网络
+
+
+
+```bash
+cixbuild cfg/unet/unetbuild.cfg
+```
+
+
+
+#### 转换 VAE 解码器
+
+
+
+```bash
+cixbuild cfg/decoder/decoderbuild.cfg
+```
+
+
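+三个子模型的配置文件遵循同一命名规律,也可以用一个循环依次完成转换(与上面逐条执行等价):
+
+```bash
+for m in encoder unet decoder; do
+  cixbuild cfg/${m}/${m}build.cfg
+done
+```
+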
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_onnx.py
+please input prompt text: majestic crystal mountains under aurora borealis, fantasy landscape, trending on artstation
+using unified predictor-corrector with order 1 (solver type: B(h))
+using corrector
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+do not run corrector at the last step
+using unified predictor-corrector with order 1 (solver type: B(h))
+Decoder:
+SD time : 56.92895817756653
+```
+
+
+
+#### 生成图片
+
+
+
+

+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### 模型运行结果
+
+
+
+```bash
+$ python3 inference_npu.py
+please input prompt text: a single wilting rose on a marble table, cinematic lighting, moody atmosphere
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 3.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_clean_job success
+using unified predictor-corrector with order 1 (solver type: B(h))
+using corrector
+npu: noe_create_job success
+npu: noe_clean_job success
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+npu: noe_create_job success
+npu: noe_clean_job success
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+npu: noe_create_job success
+npu: noe_clean_job success
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+npu: noe_create_job success
+npu: noe_clean_job success
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+npu: noe_create_job success
+npu: noe_clean_job success
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+npu: noe_create_job success
+npu: noe_clean_job success
+using unified predictor-corrector with order 2 (solver type: B(h))
+using corrector
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+do not run corrector at the last step
+using unified predictor-corrector with order 1 (solver type: B(h))
+Decoder:
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+SD time : 20.26415753364563
+```
+
+
+
+#### 生成图片
+
+
+
+

+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_ultra-fast-lane-detection-v2.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_ultra-fast-lane-detection-v2.mdx
new file mode 100644
index 000000000..fc1194539
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_ultra-fast-lane-detection-v2.mdx
@@ -0,0 +1,143 @@
+**Ultra Fast Lane Detection (UFLD)** 是一类专注于车道线检测的极速深度学习算法。它改变了传统的像素级分割思路,通过创新的基于行选择(Row-based Selection)的分类机制,将检测任务转化为简单的分类问题,极大地提升了模型的运行速度。
+
+- 核心特点:专注于道路场景下的实时车道线检测,能够快速准确地勾勒出车道边界,为自动驾驶系统的车道保持(LKA)和车道偏离预警(LDW)提供核心视觉支持。
+- 版本说明:本案例采用 Ultra Fast Lane Detection V2 (UFLDv2) 模型。作为该系列的进阶版本,它引入了混合锚点机制,不仅增强了对曲线及复杂遮挡场景的检测鲁棒性,还保持了该系列一贯的极速推理优势。它在保证低延迟的同时进一步提升了空间结构的捕捉能力,是目前车载嵌入式端实现高效实时车道感知的主流平衡选择。
+
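+为帮助理解上述“行选择”机制,下面给出一个极简的解码示意(纯 numpy 演示,张量形状与真实模型输出并不对应,仅说明“按行分类、取期望得到车道点横坐标”的思路):
+
+```python
+import numpy as np
+
+def decode_row_anchors(logits, img_w):
+    # logits: [num_rows, num_cols + 1],最后一列表示“该行不存在车道点”
+    z = logits - logits.max(axis=1, keepdims=True)
+    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # 逐行 softmax
+    points = []
+    for row_idx, row in enumerate(probs):
+        if row[:-1].sum() > row[-1]:  # 该行存在车道点
+            cols = np.arange(len(row) - 1)
+            x = (row[:-1] * cols).sum() / row[:-1].sum()  # 列方向的期望位置
+            points.append((x / (len(row) - 1) * img_w, row_idx))
+    return points
+
+points = decode_row_anchors(np.random.randn(56, 101), img_w=1640)
+```
+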
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Lane_Detection/onnx_Ultra_Fast_Lane_Detection_v2
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Lane_Detection/onnx_Ultra_Fast_Lane_Detection_v2/Ultra_Fast_Lane_Detection_v2.cix
+```
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Lane_Detection/onnx_Ultra_Fast_Lane_Detection_v2/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Lane_Detection/onnx_Ultra_Fast_Lane_Detection_v2/model/Ultra_Fast_Lane_Detection_v2.onnx
+```
+
+
+
+### 项目结构
+
+```txt
+├── cfg
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── ReadMe.md
+├── test_data
+└── Ultra_Fast_Lane_Detection_v2.cix
+```
+
+### 进行模型量化和转换
+
+
+
+```bash
+cd ..
+cixbuild cfg/Ultra-Fast-Lane-Detection_v2build.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+### 进行 NPU 部署
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 4.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_vdsr.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_vdsr.mdx
index fd0101f76..54d86344e 100644
--- a/docs/common/orion-common/app-dev/artificial-intelligence/_vdsr.mdx
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_vdsr.mdx
@@ -1,227 +1,139 @@
-此文档介绍如何使用 CIX P1 NPU SDK 将 [VDSR](https://github.com/twtygqyy/pytorch-vdsr) 转换为 CIX SOC NPU 上可以运行的模型。
+**VDSR** 是图像超分辨率技术发展史上的里程碑式架构。它首次在超分辨率任务中将卷积神经网络的深度成功提升至 20 层,通过引入全局残差学习(Global Residual Learning)机制,缓解了深层网络难以收敛的问题,并确立了学习“图像残差”而非直接回归像素值的核心思路。
-整体来讲有四个步骤:
-:::tip
-步骤1~3 在 x86 主机 Linux 环境下执行
-:::
+- 核心特点:支持对单一模型进行多倍率(如 2x, 3x, 4x)的图像超分辨率重建,具备强大的边缘恢复能力与纹理细节增强效果,广泛应用于高清视频转换、数字图像修复及医学影像增强等领域。
+- 版本说明:本案例采用 VDSR 标准架构。该模型通过极大的感受野有效捕捉图像的上下文信息,并利用高学习率配合梯度裁剪技术实现了高效的训练过程。作为深度超分辨率算法的开山之作,它在保持结构简洁性的同时,提供了远超传统插值算法的视觉清晰度,是目前研究超分辨率技术演进与工业落地的经典基石选择。
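+
+“全局残差学习”意味着网络只预测插值放大结果与高清真值之间的差值,推理时再把差值加回输入。下面用 numpy 作一个纯示意(数据为随机占位,仅说明计算关系):
+
+```python
+import numpy as np
+
+upscaled = np.random.rand(1, 1, 256, 256).astype(np.float32)        # 双三次插值后的低清输入
+residual = np.random.rand(1, 1, 256, 256).astype(np.float32) - 0.5  # 网络预测的残差
+sr = np.clip(upscaled + residual, 0.0, 1.0)                         # 重建结果 = 输入 + 残差
+```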
-1. 下载 NPU SDK 并安装 NOE Compiler
-2. 下载模型文件 (代码和脚本)
-3. 编译模型
-4. 部署模型到 Orion O6 / O6N
+:::info[环境配置]
+需要提前配置好相关环境。
-## 下载 NPU SDK 并安装 NOE Compiler
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
-请参考 [安装 NPU SDK](./npu-introduction#npu-sdk-安装) 进行 NPU SDK 和 NOE Compiler 的安装.
+## 快速开始
-## 下载模型文件
+### 下载模型文件
-在 CIX AI Model Hub 中包含了 VDSR 的所需文件, 请用户按照 [下载 CIX AI Model Hub](./ai-hub#下载-cix-ai-model-hub) 下载
+
```bash
-cd ai_model_hub/models/ComputeVision/Super_Resolution/onnx_vdsr
+cd ai_model_hub_25_Q3/models/ComputeVision/Super_Resolution/onnx_vdsr
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Super_Resolution/onnx_vdsr/vdsr.cix
```
-请确认目录结构是否同下图所示。
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
```bash
-.
+cd ai_model_hub_25_Q3/models/ComputeVision/Super_Resolution/onnx_vdsr/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Super_Resolution/onnx_vdsr/model/vdsr.onnx
+```
+
+
+
+### 项目结构
+
+```txt
├── cfg
-│ └── onnx_vdsr_build.cfg
├── datasets
-│ └── calib_dataset.npy
-├── graph.json
├── inference_npu.py
├── inference_onnx.py
-├── output
-│ └── butterfly_comparison.png
+├── model
├── ReadMe.md
-└── test_data
- ├── butterfly_GT.bmp
- ├── butterfly_GT_scale_2.bmp
- ├── butterfly_GT_scale_3.bmp
- └── butterfly_GT_scale_4.bmp
+├── test_data
+└── vdsr.cix
```
-## 编译模型
+### 进行模型量化和转换
-:::tip
-用户可无需从头编译模型,radxa 提供预编译好的 vdsr.cix 模型(可用下面步骤下载),如果使用预编译好的模型,可以跳过“编译模型” 这一步
+
```bash
-wget https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Super_Resolution/onnx_vdsr/vdsr.cix
+cd ..
+cixbuild cfg/onnx_vdsr_build.cfg
```
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
:::
-### 准备 onnx 模型
-
-- 下载 onnx 模型
-
- [vdsr.onnx](https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Super_Resolution/onnx_vdsr/model/vdsr.onnx)
-
-- 简化模型
-
- 这里使用 onnxsim 进行模型输入固化和模型简化
-
- ```bash
- pip3 install onnxsim onnxruntime
- onnxsim vdsr.onnx vdsr-sim.onnx --overwrite-input-shape 1,1,256,256
- ```
-
-### 编译模型
-
-CIX SOC NPU 支持 INT8 计算,在编译模型前,我们需要使用 NOE Compiler 对模型进行 INT8 量化
-
-- 准备校准集
-
- - 使用 `datasets` 现有校准集
-
- ```bash
- .
- └── calibration_data.npy
- ```
-
- - 自行准备校准集
-
- 在 `test_data` 目录下已经包含多张校准集的图片文件
-
- ```bash
- .
- ├── 1.jpeg
- └── 2.jpeg
- ```
-
- 参考以下脚本生成校准文件
-
- ```python
- import sys
- import os
- import numpy as np
- import cv2
- _abs_path = os.path.join(os.getcwd(), "../../../../")
- sys.path.append(_abs_path)
- from utils.image_process import normalize_image
- from utils.tools import get_file_list
- # Get a list of images from the provided path
- images_path = "test_data"
- images_list = get_file_list(images_path)
- data = []
- for image_path in images_list:
- image_numpy = cv2.imread(image_path)
- image_numpy = cv2.resize(image_numpy, (256, 256))
- image_gray = cv2.cvtColor(image_numpy,cv2.COLOR_BGR2GRAY)
- image_ex = np.expand_dims(image_gray, 0)
- input = normalize_image(image_ex)
- data.append(input)
- # concat the data and save calib dataset
- data = np.concatenate(data, axis=0)
- np.save("datasets/calib_data_tmp.npy", data)
- print("Generate calib dataset success.")
- ```
-
-- 使用 NOE Compiler 量化与编译模型
-
- - 制作量化与编译 cfg 配置文件, 请参考以下配置
-
- ```bash
- [Common]
- mode = build
-
- [Parser]
- model_type = onnx
- model_name = vdsr
- input_model = ./vdsr-sim.onnx
- input = input.1
- input_shape = [1,1,256,256]
- output_dir = ./out
-
- [Optimizer]
- metric_batch_size = 1
- dataset = numpydataset
- calibration_data = ./datasets/calib_data_tmp.npy
- calibration_batch_size = 1
- calibration_strategy_for_activation = extrema & <[Convolution]:mean>
- quantize_method_for_weight = per_channel_symmetric_full_range
- quantize_method_for_activation = per_tensor_asymmetric
- activation_bits = 8
- weight_bits = 8
- bias_bits = 32
- cast_dtypes_for_lib = True
- output_dir = ./out
- save_statistic_info=False
-
- [GBuilder]
- outputs=vdsr.cix
- target=X2_1204MP3
- tiling= fps
- ```
-
- - 编译模型
- :::tip
- 如果遇到 cixbuild 报错 `[E] Optimizing model failed! CUDA error: no kernel image is available for execution on the device ...`
- 这意味着当前版本的 torch 不支持此 GPU,请完全卸载当前版本的 torch, 然后在 torch 官网下载最新版本。
- :::
- ```bash
- cixbuild ./onnx_deeplab_v3_build.cfg
- ```
-
-## 模型部署
-
-### NPU 推理
-
-将使用 NOE Compiler 编译好的 .cix 格式的模型复制到 Orion O6 / O6N 上进行模型验证
+### 测试主机推理
+
+#### 运行推理脚本
+
+
```bash
-python3 inference_npu.py --images ./test_data/ --model_path ./vdsr.cix
+python3 inference_onnx.py
```
-```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Super_Resolution/onnx_vdsr$ time python3 inference_npu.py --images ./test_data/ --model_path ./vdsr.cix
-npu: noe_init_context success
-npu: noe_load_graph success
-Input tensor count is 1.
-Output tensor count is 1.
-npu: noe_create_job success
-Scale= 4
-PSNR_bicubic= 20.777296489759777
-PSNR_predicted= 25.375403931263882
-npu: noe_clean_job success
-npu: noe_unload_graph success
-npu: noe_deinit_context success
+
-real 0m2.837s
-user 0m3.270s
-sys 0m0.223s
-```
+#### 模型推理结果
+
+
+
+{" "}
-结果保存在 `output_npu` 文件夹中
+

-
+
-### CPU 推理
+### 进行 NPU 部署
-使用 CPU 对 onnx 模型进行推理验证正确性,可在 X86 主机上或 Orion O6 / O6N 上运行
+#### 运行推理脚本
+
+
```bash
-python3 inference_onnx.py --images ./test_data/ --onnx_path ./deeplabv3_resnet50-sim.onnx
+python3 inference_npu.py
```
-```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Semantic_Segmentation/onnx_deeplab_v3$ time python3 inference_onnx.py --images ./test_data/ --onnx_path ./deeplabv3_resnet50-sim.onnx
-save output: onnx_ILSVRC2012_val_00004704.JPEG
+
-real 0m7.605s
-user 0m33.235s
-sys 0m0.558s
+#### 模型推理结果
+
+
+```bash
+$ python3 inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
```
-结果保存在 `output_onnx` 文件夹中
-
+
+
+
-可以看到 NPU 和 CPU 上推理的结果一致,但运行速度大幅缩短
+{" "}
-## 参考文档
+

-论文链接: [Accurate Image Super-Resolution Using Very Deep Convolutional Networks](https://arxiv.org/abs/1511.04587)
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_whisper-medium.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_whisper-medium.mdx
new file mode 100644
index 000000000..a1b66cf7b
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_whisper-medium.mdx
@@ -0,0 +1,195 @@
+**Whisper** 是由 OpenAI 推出的开源通用语音识别模型。它通过 68 万小时的大规模多语种数据预训练,具备极强的鲁棒性,能够从容应对复杂背景噪声和各类口音。
+
+- 核心特点:支持高精度的多语种语音转文字、语种自动检测以及语音翻译。
+- 版本说明:本案例采用 Whisper Medium Multilingual 模型。作为家族中的中量级成员,它在保证中文及多语言识别准确率的同时,兼顾了推理效率,是目前兼具性能与速度的主流平衡选择。
+
+:::info[环境配置]
+需要提前配置好相关环境。
+
+- [环境配置](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## 快速开始
+
+### 下载模型
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/Audio/Speech_Recognotion/onnx_whisper_medium_multilingual
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Audio/Speech_Recognotion/onnx_whisper_medium_multilingual/whisper_medium_multilingual_decoder.cix
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Audio/Speech_Recognotion/onnx_whisper_medium_multilingual/whisper_medium_multilingual_encoder.cix
+```
+
+
+
+### 安装依赖
+
+
+
+```bash
+sudo apt update
+sudo apt install ffmpeg
+```
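+
+Whisper 的标准输入为 16 kHz 单声道音频。若想用自己的录音进行测试,可以先用 ffmpeg 统一转码(输入输出文件名仅为示例):
+
+```bash
+ffmpeg -i input.mp3 -ar 16000 -ac 1 test_audio.wav
+```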
+
+
+
+### 模型测试
+
+:::info
+运行前激活虚拟环境!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## 完整转换流程
+
+### 下载模型文件
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/Audio/Speech_Recognotion/onnx_whisper_medium_multilingual/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Audio/Speech_Recognotion/onnx_whisper_medium_multilingual/model/whisper_medium_multilingual_decoder.onnx
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/Audio/Speech_Recognotion/onnx_whisper_medium_multilingual/model/whisper_medium_multilingual_encoder.onnx
+```
+
+
+
+### 项目结构
+
+```txt
+.
+├── cfg
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── ReadMe.md
+├── test_data
+├── whisper
+├── whisper-medium
+├── whisper_medium_multilingual_decoder.cix
+└── whisper_medium_multilingual_encoder.cix
+```
+
+### 进行模型量化和转换
+
+#### 转换编码器
+
+
+
+```bash
+cd ..
+cixbuild cfg/whisper_medium_multilingual_encoder/whisper_medium_multilingual_encoder_build.cfg
+```
+
+
+
+#### 转换解码器
+
+
+
+```bash
+cixbuild cfg/whisper_medium_multilingual_decoder/whisper_medium_multilingual_decoder_build.cfg
+```
+
+
+
+:::info[推送到板端]
+完成模型转换之后需要将 cix 模型文件推送到板端。
+:::
+
+### 测试主机推理
+
+#### 安装 ffmpeg
+
+
+
+```bash
+sudo apt update
+sudo apt install ffmpeg
+```
+
+
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### 模型推理结果
+
+会在 output 目录下生成 test_audio_npu.txt 文件。
+
+```txt
+They regain their apartment, apparently without disturbing the household of Gainwell.
+```
+
+### 进行 NPU 部署
+
+#### 安装 ffmpeg
+
+
+
+```bash
+sudo apt update
+sudo apt install ffmpeg
+```
+
+
+
+#### 运行推理脚本
+
+
+
+```bash
+python3 inference_npu.py --backend npu --encoder_model_path whisper_medium_multilingual_encoder.cix --decoder_model_path whisper_medium_multilingual_decoder.cix
+```
+
+
+
+#### 模型推理结果
+
+
+
+```bash
+$ python3 inference_npu.py --backend npu --encoder_model_path whisper_medium_multilingual_encoder.cix --decoder_model_path whisper_medium_multilingual_decoder.cix
+2025-12-29 10:55:26.758036920 [W:onnxruntime:Default, device_discovery.cc:164 DiscoverDevicesForPlatform] GPU device discovery failed: device_discovery.cc:89 ReadFileContents Failed to open file: "/sys/class/drm/card3/device/vendor"
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 5.
+Output tensor count is 2.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+A test_audio_npu.txt file is generated in the `output` directory.
+
+```txt
+They regain their apartment, apparently without disturbing the household of Gainwell.
+```
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8.mdx
deleted file mode 100644
index d66228510..000000000
--- a/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8.mdx
+++ /dev/null
@@ -1,225 +0,0 @@
-This document describes how to use the CIX P1 NPU SDK to convert [YOLOv8](https://github.com/ultralytics/ultralytics/tree/v8.1.43) into a model that can run on the CIX SOC NPU.
-
-Overall there are four steps:
-:::tip
-Steps 1–3 are performed on an x86 Linux host
-:::
-
-1. Download the NPU SDK and install the NOE Compiler
-2. Download the model files (code and scripts)
-3. Compile the model
-4. Deploy the model to the Orion O6 / O6N
-
-## Download the NPU SDK and install the NOE Compiler
-
-Please follow [Install the NPU SDK](./npu-introduction#npu-sdk-安装) to install the NPU SDK and the NOE Compiler.
-
-## Download the model files
-
-The CIX AI Model Hub contains all the files YOLOv8 needs. Download it following [Download the CIX AI Model Hub](./ai-hub#下载-cix-ai-model-hub), then check the corresponding directory
-
-```bash
-cd ai_model_hub/models/ComputeVision/Object_Detection/onnx_yolov8_l
-```
-
-Confirm that the directory structure matches the listing below.
-
-```bash
-.
-├── cfg
-│ └── yolov8_lbuild.cfg
-├── datasets
-│ ├── calibration_data.npy
-│ └── input0.bin
-├── graph.json
-├── inference_npu.py
-├── inference_onnx.py
-├── ReadMe.md
-└── test_data
- ├── 1.jpeg
- └── ILSVRC2012_val_00004704.JPEG
-```
-
-## Compile the model
-
-:::tip
-You do not have to compile the model from scratch: Radxa provides a precompiled yolov8_l.cix model (downloadable with the step below). If you use the precompiled model, you can skip the "Compile the model" step.
-
-```bash
-wget https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Object_Detection/onnx_yolov8_l/yolov8_l.cix
-```
-
-:::
-
-### Prepare the onnx model
-
-- Download the onnx model
-
-  [yolov8l.onnx](https://modelscope.cn/models/cix/ai_model_hub_24_Q4/resolve/master/models/ComputeVision/Object_Detection/onnx_yolov8_l/model/yolov8l.onnx)
-
-- Simplify the model
-
-  Here onnxsim is used to fix the model's input shape and simplify the graph
-
- ```bash
- pip3 install onnxsim onnxruntime
- onnxsim yolov8l.onnx yolov8l-sim.onnx --overwrite-input-shape 1,3,640,640
- ```
-
-### Compile the model
-
-The CIX SOC NPU supports INT8 compute, so before compiling we need to quantize the model to INT8 with the NOE Compiler
-
-- Prepare the calibration set
-
-  - Use the existing calibration set in `datasets`
-
- ```bash
- .
- └── calibration_data.npy
- ```
-
-  - Prepare your own calibration set
-
-    The `test_data` directory already contains several calibration images
-
- ```bash
- .
- ├── 1.jpeg
- └── ILSVRC2012_val_00004704.JPEG
- ```
-
-    Generate the calibration file with a script like the following
-
- ```python
- import sys
- import os
- import numpy as np
- _abs_path = os.path.join(os.getcwd(), "../../../../")
- sys.path.append(_abs_path)
- from utils.image_process import preprocess_object_detect_method1
- from utils.tools import get_file_list
- # Get a list of images from the provided path
- images_path = "test_data"
- images_list = get_file_list(images_path)
- data = []
- for image_path in images_list:
- input = preprocess_object_detect_method1(image_path, (640, 640))[3]
- data.append(input)
- # concat the data and save calib dataset
- data = np.concatenate(data, axis=0)
- np.save("datasets/calib_data_tmp.npy", data)
- print("Generate calib dataset success.")
- ```
-
-- Quantize and compile the model with the NOE Compiler
-
-  - Create the quantization and compilation cfg file; refer to the following configuration
-
- ```bash
- [Common]
- mode = build
-
- [Parser]
- model_type = ONNX
- model_name = yolov8_l
- detection_postprocess =
- model_domain = OBJECT_DETECTION
- input_data_format = NCHW
- input_model = ./yolov8l-sim.onnx
- input = images
- input_shape = [1, 3, 640, 640]
- output_dir = ./
-
- [Optimizer]
- dataset = numpydataset
- calibration_data = datasets/calib_data_tmp.npy
- calibration_batch_size = 1
- output_dir = ./
- dump_dir = ./
- quantize_method_for_activation = per_tensor_asymmetric
- quantize_method_for_weight = per_channel_symmetric_restricted_range
- save_statistic_info = True
- trigger_float_op = disable & <[(258, 272)]:float16_preferred!>
- weight_bits = 8& <[(273,274)]:16>
- activation_bits = 8& <[(273,274)]:16>
- bias_bits = 32& <[(273,274)]:48>
-
- [GBuilder]
- target = X2_1204MP3
- outputs = yolov8_l.cix
- tiling = fps
- profile = True
- ```
-
-  - Compile the model
-    :::tip
-    If cixbuild reports `[E] Optimizing model failed! CUDA error: no kernel image is available for execution on the device ...`,
-    it means the current torch version does not support this GPU; completely uninstall the current torch, then install the latest version from the official torch site.
-    :::
- ```bash
- cixbuild ./yolov8_lbuild.cfg
- ```
-
-## Model deployment
-
-### NPU inference
-
-Copy the .cix model compiled with the NOE Compiler to the Orion O6 / O6N board for validation
-
-```bash
-python3 inference_npu.py --image_path ./test_data/ --model_path ./yolov8_l.cix
-```
-
-```bash
-v) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Object_Detection/onnx_yolov8_l$ time python3 inference_npu.py --image_path ./test_data/ --model_path ./yolov8_l.cix
-npu: noe_init_context success
-npu: noe_load_graph success
-Input tensor count is 1.
-Output tensor count is 1.
-npu: noe_create_job success
-100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8.08it/s]
-npu: noe_clean_job success
-npu: noe_unload_graph success
-npu: noe_deinit_context success
-
-real 0m3.884s
-user 0m3.391s
-sys 0m0.400s
-```
-
-The results are saved in the `output_npu` folder
-
-
-
-
-
-### CPU inference
-
-Run the onnx model on the CPU to verify correctness; this can be done on an x86 host or on the Orion O6 / O6N
-
-```bash
-python3 inference_onnx.py --image_path ./test_data/ --onnx_path ./yolov8l.onnx
-```
-
-```bash
-(.venv) radxa@orion-o6:~/NOE/ai_model_hub/models/ComputeVision/Object_Detection/onnx_yolov8_l$ time python3 inference_onnx.py --image_path ./test_data/ --onnx_path ./yolov8l.onnx
-/usr/local/lib/python3.11/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'ZhouyiExecutionProvider, CPUExecutionProvider'
- warnings.warn(
-100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
-
-real 0m6.671s
-user 0m37.881s
-sys 0m0.616s
-```
-
-The results are saved in the `output_onnx` folder
-
-
-
-
-The NPU inference results match the CPU results, while the run time is drastically shorter
-
-## References
-
-Paper: [What is YOLOv8: An In-Depth Exploration of the Internal Features of the Next-Generation Object Detector](https://arxiv.org/abs/2408.15857)
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8n.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8n.mdx
new file mode 100644
index 000000000..9433693aa
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8n.mdx
@@ -0,0 +1,159 @@
+**YOLOv8n** is the smallest and fastest lightweight vision model in the Ultralytics YOLOv8 family. Built on a modern deep-learning architecture, it delivers strong real-time detection performance at very low compute cost, making it a first choice for edge and mobile deployment.
+
+- Key features: high-accuracy real-time object detection, instance segmentation, image classification, and pose estimation (keypoint detection).
+- Version notes: this example uses the **YOLOv8n (Nano)** model. As the lightweight baseline of the family, its minimal parameter count yields very high inference frame rates while keeping mainstream detection accuracy, greatly reducing hardware requirements and making it a strong choice when both real-time response and easy deployment matter.
+
+:::info[Environment setup]
+The relevant environments must be configured in advance.
+
+- [Environment setup](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## Quick start
+
+### Download the model files
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Object_Detection/onnx_yolov8_n
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Object_Detection/onnx_yolov8_n/yolov8n.cix
+```
+
+
+
+### Model test
+
+:::info
+Activate the virtual environment before running!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## Full conversion workflow
+
+### Download the model files
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Object_Detection/onnx_yolov8_n/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Object_Detection/onnx_yolov8_n/model/yolov8n.onnx
+```
+
+
+
+### Project structure
+
+```txt
+├── cfg
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── ReadMe.md
+├── test_data
+└── yolov8n.cix
+```
+
+### Model quantization and conversion
+
+
+
+```bash
+cd ..
+cixbuild cfg/yolov8_nbuild.cfg
+```
+
+
+
+:::info[Push to the board]
+After the conversion completes, push the resulting .cix model file to the board, e.g. with scp as sketched below.
+:::
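+
+As with Whisper above, a hedged `scp` sketch (`<board-ip>` is a placeholder; the target path assumes the same repository layout on the board):
+
+```bash
+# Copy the compiled model next to the inference scripts on the board
+scp yolov8n.cix radxa@<board-ip>:~/ai_model_hub_25_Q3/models/ComputeVision/Object_Detection/onnx_yolov8_n/
+```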
+
+### Test host inference
+
+#### Run the inference script
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### Host inference results
+
+
+
+```bash
+$ python3 inference_onnx.py
+100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 19.24it/s]
+```
+
+
+
+
+
+![yolov8n host output 1](/img/orion/o6/ai-models/yolov8n-host-out1.webp)
+
+![yolov8n host output 2](/img/orion/o6/ai-models/yolov8n-host-out2.webp)
+
+
+### Deploy on the NPU
+
+#### Run the inference script
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### NPU inference results
+
+
+
+```bash
+$ python3 inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 18.07it/s]
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+
+
+![yolov8n npu output 1](/img/orion/o6/ai-models/yolov8n-npu-out1.webp)
+
+![yolov8n npu output 2](/img/orion/o6/ai-models/yolov8n-npu-out2.webp)
+
diff --git a/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8s-pose.mdx b/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8s-pose.mdx
new file mode 100644
index 000000000..b5a62e864
--- /dev/null
+++ b/docs/common/orion-common/app-dev/artificial-intelligence/_yolov8s-pose.mdx
@@ -0,0 +1,141 @@
+**YOLOv8-pose** is an advanced deep-learning model from Ultralytics dedicated to human pose estimation. It inherits the YOLOv8 detection architecture and folds object detection and keypoint localization into a single-stage inference pipeline, capturing complex human motion efficiently.
+
+- Key features: real-time human keypoint detection and pose recognition with precise localization of skeletal joints, widely used in motion analysis, interactive games, behavior monitoring, and rehabilitation guidance.
+- Version notes: this example uses the YOLOv8s-pose model. As the lightweight step-up of the series, it keeps very high inference speed while its added depth and channel width improve feature extraction in cluttered backgrounds and under limb occlusion. It strikes a good balance between accuracy and real-time performance, especially where detection accuracy and efficient on-device deployment both matter.
+
+:::info[Environment setup]
+The relevant environments must be configured in advance.
+
+- [Environment setup](../../../../orion/o6/app-development/artificial-intelligence/env-setup.md)
+- [AI Model Hub](../../../../orion/o6/app-development/artificial-intelligence/ai-hub.md)
+ :::
+
+## Quick start
+
+### Download the model files
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Pose_Estimation/onnx_yolov8s_pose
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Pose_Estimation/onnx_yolov8s_pose/yolov8s-pose.cix
+```
+
+
+
+### Model test
+
+:::info
+Activate the virtual environment before running!
+:::
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+## Full conversion workflow
+
+### Download the model files
+
+
+
+```bash
+cd ai_model_hub_25_Q3/models/ComputeVision/Pose_Estimation/onnx_yolov8s_pose/model
+wget https://www.modelscope.cn/models/cix/ai_model_hub_25_Q3/resolve/master/models/ComputeVision/Pose_Estimation/onnx_yolov8s_pose/model/yolov8s-pose.onnx
+```
+
+
+
+### Project structure
+
+```txt
+├── cfg
+├── datasets
+├── inference_npu.py
+├── inference_onnx.py
+├── model
+├── ReadMe.md
+├── test_data
+└── yolov8s-pose.cix
+```
+
+### Model quantization and conversion
+
+
+
+```bash
+cd ..
+cixbuild cfg/yolov8s_posebuild.cfg
+```
+
+
+
+:::info[Push to the board]
+After the conversion completes, push the resulting .cix model file to the board (in the same way as for the models above).
+:::
+
+### Test host inference
+
+#### Run the inference script
+
+
+
+```bash
+python3 inference_onnx.py
+```
+
+
+
+#### Host inference results
+
+
+
+{" "}
+
+
+![yolov8s-pose host output 1](/img/orion/o6/ai-models/yolov8s-pose-host-out1.webp)
+
+![yolov8s-pose host output 2](/img/orion/o6/ai-models/yolov8s-pose-host-out2.webp)
+
+
+
+### Deploy on the NPU
+
+#### Run the inference script
+
+
+
+```bash
+python3 inference_npu.py
+```
+
+
+
+#### NPU inference results
+
+
+
+```bash
+$ python3 inference_npu.py
+npu: noe_init_context success
+npu: noe_load_graph success
+Input tensor count is 1.
+Output tensor count is 1.
+npu: noe_create_job success
+npu: noe_clean_job success
+npu: noe_unload_graph success
+npu: noe_deinit_context success
+```
+
+
+
+
+
+{" "}
+
+
+![yolov8s-pose npu output 1](/img/orion/o6/ai-models/yolov8s-pose-npu-out1.webp)
+
+![yolov8s-pose npu output 2](/img/orion/o6/ai-models/yolov8s-pose-npu-out2.webp)
+
+
diff --git a/docs/orion/o6/app-development/README.md b/docs/orion/o6/app-development/README.md
index 0395fd871..042e8be0f 100644
--- a/docs/orion/o6/app-development/README.md
+++ b/docs/orion/o6/app-development/README.md
@@ -4,6 +4,6 @@ sidebar_position: 50
# Application development
-Mainly covers upper-layer application development, such as NPU application development
+Mainly covers upper-layer application development, such as NPU application development.
diff --git a/docs/orion/o6/app-development/artificial-intelligence/API-manual.md b/docs/orion/o6/app-development/artificial-intelligence/API-manual.md
new file mode 100644
index 000000000..cd65a3a1c
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/API-manual.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 9
+---
+
+import API_Manual from "../../../../common/orion-common/app-dev/artificial-intelligence/\_API-manual.mdx";
+
+# API manual
+
+<API_Manual />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Audio/README.md b/docs/orion/o6/app-development/artificial-intelligence/Audio/README.md
new file mode 100644
index 000000000..c9aacd55b
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Audio/README.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+---
+
+# Speech models
+
+This section demonstrates the deployment of several representative **speech models** on Radxa O6 / O6N.
+
+
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Audio/whisper-medium.md b/docs/orion/o6/app-development/artificial-intelligence/Audio/whisper-medium.md
new file mode 100644
index 000000000..4911423ba
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Audio/whisper-medium.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import Whisper_Medium from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_whisper-medium.mdx";
+
+# Whisper
+
+<Whisper_Medium />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/GenAI/README.md b/docs/orion/o6/app-development/artificial-intelligence/GenAI/README.md
new file mode 100644
index 000000000..188efda92
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/GenAI/README.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 5
+---
+
+# Generative AI
+
+This section demonstrates the deployment of several representative generative AI models on Radxa O6 / O6N.
+
+
diff --git a/docs/orion/o6/app-development/artificial-intelligence/GenAI/clip.md b/docs/orion/o6/app-development/artificial-intelligence/GenAI/clip.md
new file mode 100644
index 000000000..c84923929
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/GenAI/clip.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+---
+
+import CLIP from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_clip.mdx';
+
+# CLIP
+
+<CLIP />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/GenAI/ernie-4_5-0_3b_llama_cpp.md b/docs/orion/o6/app-development/artificial-intelligence/GenAI/ernie-4_5-0_3b_llama_cpp.md
new file mode 100644
index 000000000..78e959673
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/GenAI/ernie-4_5-0_3b_llama_cpp.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import ERNIE4503BLLAMACPP from '../../../../../common/ai/\_ernie-4_5-0_3b_llama_cpp.mdx';
+
+# ERNIE 4.5-0.3B
+
+<ERNIE4503BLLAMACPP />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/GenAI/ernie-4_5-21b-a3b_llama_cpp.md b/docs/orion/o6/app-development/artificial-intelligence/GenAI/ernie-4_5-21b-a3b_llama_cpp.md
new file mode 100644
index 000000000..59b31c030
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/GenAI/ernie-4_5-21b-a3b_llama_cpp.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 2
+---
+
+import ERNIE4521BA3BLLAMACPP from '../../../../../common/ai/\_ernie-4_5-21b-a3b_llama_cpp.mdx';
+
+# ERNIE 4.5-21B-A3B
+
+<ERNIE4521BA3BLLAMACPP />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/GenAI/sd-v1-4.md b/docs/orion/o6/app-development/artificial-intelligence/GenAI/sd-v1-4.md
new file mode 100644
index 000000000..900ae114f
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/GenAI/sd-v1-4.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+---
+
+import SDv1_4 from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_sd-v1-4.mdx';
+
+# Stable Diffusion
+
+<SDv1_4 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Multimodality/README.md b/docs/orion/o6/app-development/artificial-intelligence/Multimodality/README.md
new file mode 100644
index 000000000..509300594
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Multimodality/README.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 6
+---
+
+# Multimodal models
+
+This section demonstrates the deployment of several representative multimodal models on Radxa O6 / O6N.
+
+
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Multimodality/qwen2-5-vl-3b.md b/docs/orion/o6/app-development/artificial-intelligence/Multimodality/qwen2-5-vl-3b.md
new file mode 100644
index 000000000..93f3d4ddc
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Multimodality/qwen2-5-vl-3b.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 2
+---
+
+import Qwen2_5vl from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_qwen2-5-vl-3b.mdx';
+
+# Qwen2.5 VL
+
+<Qwen2_5vl />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Multimodality/qwen2vl-2b.md b/docs/orion/o6/app-development/artificial-intelligence/Multimodality/qwen2vl-2b.md
new file mode 100644
index 000000000..9f11303b5
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Multimodality/qwen2vl-2b.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import Qwen2_vl from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_qwen2vl-2b.mdx';
+
+# Qwen2 VL
+
+<Qwen2_vl />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/README.md b/docs/orion/o6/app-development/artificial-intelligence/README.md
index 374a96aab..3224e0535 100644
--- a/docs/orion/o6/app-development/artificial-intelligence/README.md
+++ b/docs/orion/o6/app-development/artificial-intelligence/README.md
@@ -4,6 +4,8 @@ sidebar_position: 1
# Artificial intelligence
-Mainly covers application development using the NPU SDK for AI hardware acceleration
+The Radxa Orion O6 / O6N provides up to 28.8 TOPS of NPU compute and supports deploying INT4 / INT8 / INT16 / FP16 / BF16 and TF32 models.
+
+The following documents walk through the full model deployment workflow based on the CIX P1 NPU SDK "NeuralOne", covering environment setup, model compilation and quantization, and common model deployment examples.
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/README.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/README.md
new file mode 100644
index 000000000..721181cc6
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/README.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+---
+
+# Vision models
+
+This section demonstrates the deployment of several representative vision models on Radxa O6 / O6N.
+
+
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/UFLD-v2.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/UFLD-v2.md
new file mode 100644
index 000000000..5d83e5a7d
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/UFLD-v2.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 7
+---
+
+import UFLDv2 from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_ultra-fast-lane-detection-v2.mdx";
+
+# UFLDv2
+
+<UFLDv2 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/deeplab_v3.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/deeplab_v3.md
new file mode 100644
index 000000000..31c0237a7
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/deeplab_v3.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 6
+---
+
+import DeepLabV3 from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_deeplab-v3.mdx';
+
+# DeepLabV3
+
+<DeepLabV3 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/fast-scnn.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/fast-scnn.md
new file mode 100644
index 000000000..45281fe31
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/fast-scnn.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 5
+---
+
+import FastSCNN from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_fast-scnn.mdx";
+
+# Fast-SCNN
+
+<FastSCNN />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/midas-v2.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/midas-v2.md
new file mode 100644
index 000000000..69498db97
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/midas-v2.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 12
+---
+
+import MiDas_v2 from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_midas-v2.mdx';
+
+# MiDaS
+
+<MiDas_v2 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/mobilenet-v2.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/mobilenet-v2.md
new file mode 100644
index 000000000..be18277b9
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/mobilenet-v2.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import Mobilenet_V2 from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_mobilenet-v2.mdx";
+
+# MobileNetV2
+
+<Mobilenet_V2 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/openpose.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/openpose.md
new file mode 100644
index 000000000..49ef704b0
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/openpose.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 8
+---
+
+import OpenPose from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_openpose.mdx';
+
+# OpenPose
+
+<OpenPose />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/pp-ocr-v4.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/pp-ocr-v4.md
new file mode 100644
index 000000000..c42e903c7
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/pp-ocr-v4.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 11
+---
+
+import PP_OCR_V4 from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_pp-ocr-v4.mdx";
+
+# PP-OCRv4
+
+<PP_OCR_V4 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/real-esrgan.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/real-esrgan.md
new file mode 100644
index 000000000..69262f447
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/real-esrgan.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 15
+---
+
+import Real_ESRGAN from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_real-esrgan.mdx';
+
+# Real-ESRGAN
+
+<Real_ESRGAN />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/resnet50.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/resnet50.md
new file mode 100644
index 000000000..0b8fff1ec
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/resnet50.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+---
+
+import ResNet50 from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_resnet50.mdx';
+
+# ResNet50
+
+<ResNet50 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/scrfd-arcface.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/scrfd-arcface.md
new file mode 100644
index 000000000..fdfbc682e
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/scrfd-arcface.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 10
+---
+
+import SCRFD_ArcFace from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_scrfd-arcface.mdx";
+
+# SCRFD-ArcFace
+
+<SCRFD_ArcFace />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/vdsr.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/vdsr.md
new file mode 100644
index 000000000..6a8321cee
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/vdsr.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 14
+---
+
+import VDSR from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_vdsr.mdx';
+
+# VDSR
+
+<VDSR />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/yolov8n.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/yolov8n.md
new file mode 100644
index 000000000..bb0f16d55
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/yolov8n.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+---
+
+import YOLOv8n from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_yolov8n.mdx";
+
+# YOLOv8n
+
+<YOLOv8n />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/Vision/yolov8s-pose.md b/docs/orion/o6/app-development/artificial-intelligence/Vision/yolov8s-pose.md
new file mode 100644
index 000000000..3c2ededac
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/Vision/yolov8s-pose.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 9
+---
+
+import YOLOv8s_pose from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_yolov8s-pose.mdx";
+
+# YOLOv8s-pose
+
+<YOLOv8s_pose />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/ai-hub.md b/docs/orion/o6/app-development/artificial-intelligence/ai-hub.md
index b472ecd89..4adad6eda 100644
--- a/docs/orion/o6/app-development/artificial-intelligence/ai-hub.md
+++ b/docs/orion/o6/app-development/artificial-intelligence/ai-hub.md
@@ -4,6 +4,6 @@ sidebar_position: 2
import AI_Hub from '../../../../common/orion-common/app-dev/artificial-intelligence/\_ai-hub.mdx';
-# CIX AI Model HUb
+# AI Model Hub
diff --git a/docs/orion/o6/app-development/artificial-intelligence/deeplab_v3.md b/docs/orion/o6/app-development/artificial-intelligence/deeplab_v3.md
deleted file mode 100644
index d327f4de5..000000000
--- a/docs/orion/o6/app-development/artificial-intelligence/deeplab_v3.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 6
----
-
-import DeepLabv3 from '../../../../common/orion-common/app-dev/artificial-intelligence/\_deeplab_v3.mdx';
-
-# DeepLabv3 full example
-
-<DeepLabv3 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/env-setup.md b/docs/orion/o6/app-development/artificial-intelligence/env-setup.md
new file mode 100644
index 000000000..37f7d9683
--- /dev/null
+++ b/docs/orion/o6/app-development/artificial-intelligence/env-setup.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import Env_Setup from "../../../../common/orion-common/app-dev/artificial-intelligence/\_env-setup.mdx";
+
+# Environment setup
+
+<Env_Setup />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/ernie-4_5-0_3b_llama_cpp.md b/docs/orion/o6/app-development/artificial-intelligence/ernie-4_5-0_3b_llama_cpp.md
deleted file mode 100644
index 39f5e043d..000000000
--- a/docs/orion/o6/app-development/artificial-intelligence/ernie-4_5-0_3b_llama_cpp.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 10
----
-
-import ERNIE4503BLLAMACPP from '../../../../common/ai/\_ernie-4_5-0_3b_llama_cpp.mdx';
-
-# ERNIE-4.5-0.3B
-
-<ERNIE4503BLLAMACPP />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/ernie-4_5-21b-a3b_llama_cpp.md b/docs/orion/o6/app-development/artificial-intelligence/ernie-4_5-21b-a3b_llama_cpp.md
deleted file mode 100644
index 858a5912e..000000000
--- a/docs/orion/o6/app-development/artificial-intelligence/ernie-4_5-21b-a3b_llama_cpp.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 11
----
-
-import ERNIE4521BA3BLLAMACPP from '../../../../common/ai/\_ernie-4_5-21b-a3b_llama_cpp.mdx';
-
-# ERNIE-4.5-21B-A3B
-
-<ERNIE4521BA3BLLAMACPP />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/llama_cpp.md b/docs/orion/o6/app-development/artificial-intelligence/llama_cpp.md
index a86eb57ce..3bccc3ba6 100644
--- a/docs/orion/o6/app-development/artificial-intelligence/llama_cpp.md
+++ b/docs/orion/o6/app-development/artificial-intelligence/llama_cpp.md
@@ -1,9 +1,9 @@
---
-sidebar_position: 8
+sidebar_position: 7
---
import Llamacpp from '../../../../common/ai/\_llama_cpp.mdx';
-# Llama.cpp
+# llama.cpp
diff --git a/docs/orion/o6/app-development/artificial-intelligence/npu-introduction.md b/docs/orion/o6/app-development/artificial-intelligence/npu-introduction.md
deleted file mode 100644
index 9ad41f153..000000000
--- a/docs/orion/o6/app-development/artificial-intelligence/npu-introduction.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 1
----
-
-import NPU_Installation from '../../../../common/orion-common/app-dev/artificial-intelligence/\_npu-introduction.mdx';
-
-# NPU SDK installation
-
-<NPU_Installation />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/ollama.md b/docs/orion/o6/app-development/artificial-intelligence/ollama.md
index 78d030253..e3ea417af 100644
--- a/docs/orion/o6/app-development/artificial-intelligence/ollama.md
+++ b/docs/orion/o6/app-development/artificial-intelligence/ollama.md
@@ -1,5 +1,5 @@
---
-sidebar_position: 9
+sidebar_position: 8
---
import Ollama from '../../../../common/ai/\_ollama.mdx';
diff --git a/docs/orion/o6/app-development/artificial-intelligence/openpose.md b/docs/orion/o6/app-development/artificial-intelligence/openpose.md
deleted file mode 100644
index 6a079a602..000000000
--- a/docs/orion/o6/app-development/artificial-intelligence/openpose.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 5
----
-
-import OpenPose from '../../../../common/orion-common/app-dev/artificial-intelligence/\_openpose.mdx';
-
-# OpenPose full example
-
-<OpenPose />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/resnet50.md b/docs/orion/o6/app-development/artificial-intelligence/resnet50.md
deleted file mode 100644
index 570f208a6..000000000
--- a/docs/orion/o6/app-development/artificial-intelligence/resnet50.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 3
----
-
-import ResNet50 from '../../../../common/orion-common/app-dev/artificial-intelligence/\_resnet50.mdx';
-
-# ResNet50 full example
-
-<ResNet50 />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/vdsr.md b/docs/orion/o6/app-development/artificial-intelligence/vdsr.md
deleted file mode 100644
index ff4f34ac8..000000000
--- a/docs/orion/o6/app-development/artificial-intelligence/vdsr.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 7
----
-
-import VDSR from '../../../../common/orion-common/app-dev/artificial-intelligence/\_vdsr.mdx';
-
-# VDSR full example
-
-<VDSR />
diff --git a/docs/orion/o6/app-development/artificial-intelligence/yolov8.md b/docs/orion/o6/app-development/artificial-intelligence/yolov8.md
deleted file mode 100644
index a3cc0b7c4..000000000
--- a/docs/orion/o6/app-development/artificial-intelligence/yolov8.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 4
----
-
-import YOLOv8 from '../../../../common/orion-common/app-dev/artificial-intelligence/\_yolov8.mdx';
-
-# YOLOv8 full example
-
-<YOLOv8 />
diff --git a/docs/orion/o6n/app-development/README.md b/docs/orion/o6n/app-development/README.md
index 0395fd871..042e8be0f 100644
--- a/docs/orion/o6n/app-development/README.md
+++ b/docs/orion/o6n/app-development/README.md
@@ -4,6 +4,6 @@ sidebar_position: 50
# Application development
-Mainly covers upper-layer application development, such as NPU application development
+Mainly covers upper-layer application development, such as NPU application development.
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/API-manual.md b/docs/orion/o6n/app-development/artificial-intelligence/API-manual.md
new file mode 100644
index 000000000..cd65a3a1c
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/API-manual.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 9
+---
+
+import API_Manual from "../../../../common/orion-common/app-dev/artificial-intelligence/\_API-manual.mdx";
+
+# API manual
+
+<API_Manual />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Audio/README.md b/docs/orion/o6n/app-development/artificial-intelligence/Audio/README.md
new file mode 100644
index 000000000..c9aacd55b
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Audio/README.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+---
+
+# Speech models
+
+This section demonstrates the deployment of several representative **speech models** on Radxa O6 / O6N.
+
+
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Audio/whisper-medium.md b/docs/orion/o6n/app-development/artificial-intelligence/Audio/whisper-medium.md
new file mode 100644
index 000000000..4911423ba
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Audio/whisper-medium.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import Whisper_Medium from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_whisper-medium.mdx";
+
+# Whisper
+
+<Whisper_Medium />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/GenAI/README.md b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/README.md
new file mode 100644
index 000000000..188efda92
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/README.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 5
+---
+
+# Generative AI
+
+This section demonstrates the deployment of several representative generative AI models on Radxa O6 / O6N.
+
+
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/GenAI/clip.md b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/clip.md
new file mode 100644
index 000000000..c84923929
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/clip.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+---
+
+import CLIP from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_clip.mdx';
+
+# CLIP
+
+<CLIP />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/GenAI/ernie-4_5-0_3b_llama_cpp.md b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/ernie-4_5-0_3b_llama_cpp.md
new file mode 100644
index 000000000..78e959673
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/ernie-4_5-0_3b_llama_cpp.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import ERNIE4503BLLAMACPP from '../../../../../common/ai/\_ernie-4_5-0_3b_llama_cpp.mdx';
+
+# ERNIE 4.5-0.3B
+
+<ERNIE4503BLLAMACPP />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/GenAI/ernie-4_5-21b-a3b_llama_cpp.md b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/ernie-4_5-21b-a3b_llama_cpp.md
new file mode 100644
index 000000000..59b31c030
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/ernie-4_5-21b-a3b_llama_cpp.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 2
+---
+
+import ERNIE4521BA3BLLAMACPP from '../../../../../common/ai/\_ernie-4_5-21b-a3b_llama_cpp.mdx';
+
+# ERNIE 4.5-21B-A3B
+
+<ERNIE4521BA3BLLAMACPP />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/GenAI/sd-v1-4.md b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/sd-v1-4.md
new file mode 100644
index 000000000..900ae114f
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/GenAI/sd-v1-4.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+---
+
+import SDv1_4 from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_sd-v1-4.mdx';
+
+# Stable Diffusion
+
+<SDv1_4 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/README.md b/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/README.md
new file mode 100644
index 000000000..509300594
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/README.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 6
+---
+
+# Multimodal models
+
+This section demonstrates the deployment of several representative multimodal models on Radxa O6 / O6N.
+
+
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/qwen2-5-vl-3b.md b/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/qwen2-5-vl-3b.md
new file mode 100644
index 000000000..93f3d4ddc
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/qwen2-5-vl-3b.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 2
+---
+
+import Qwen2_5vl from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_qwen2-5-vl-3b.mdx';
+
+# Qwen2.5 VL
+
+<Qwen2_5vl />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/qwen2vl-2b.md b/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/qwen2vl-2b.md
new file mode 100644
index 000000000..8b611f297
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Multimodality/qwen2vl-2b.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import Qwen2vl from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_qwen2vl-2b.mdx';
+
+# Qwen2 VL
+
+<Qwen2vl />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/README.md b/docs/orion/o6n/app-development/artificial-intelligence/README.md
index 374a96aab..3224e0535 100644
--- a/docs/orion/o6n/app-development/artificial-intelligence/README.md
+++ b/docs/orion/o6n/app-development/artificial-intelligence/README.md
@@ -4,6 +4,8 @@ sidebar_position: 1
# Artificial intelligence
-Mainly covers application development using the NPU SDK for AI hardware acceleration
+The Radxa Orion O6 / O6N provides up to 28.8 TOPS of NPU compute and supports deploying INT4 / INT8 / INT16 / FP16 / BF16 and TF32 models.
+
+The following documents walk through the full model deployment workflow based on the CIX P1 NPU SDK "NeuralOne", covering environment setup, model compilation and quantization, and common model deployment examples.
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/README.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/README.md
new file mode 100644
index 000000000..721181cc6
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/README.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+---
+
+# Vision models
+
+This section demonstrates the deployment of several representative vision models on Radxa O6 / O6N.
+
+
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/UFLD-v2.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/UFLD-v2.md
new file mode 100644
index 000000000..5d83e5a7d
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/UFLD-v2.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 7
+---
+
+import UFLDv2 from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_ultra-fast-lane-detection-v2.mdx";
+
+# UFLDv2
+
+<UFLDv2 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/deeplab_v3.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/deeplab_v3.md
new file mode 100644
index 000000000..31c0237a7
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/deeplab_v3.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 6
+---
+
+import DeepLabV3 from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_deeplab-v3.mdx';
+
+# DeepLabV3
+
+<DeepLabV3 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/fast-scnn.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/fast-scnn.md
new file mode 100644
index 000000000..45281fe31
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/fast-scnn.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 5
+---
+
+import FastSCNN from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_fast-scnn.mdx";
+
+# Fast-SCNN
+
+<FastSCNN />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/midas-v2.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/midas-v2.md
new file mode 100644
index 000000000..69498db97
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/midas-v2.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 12
+---
+
+import MiDas_v2 from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_midas-v2.mdx';
+
+# MiDaS
+
+<MiDas_v2 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/mobilenet-v2.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/mobilenet-v2.md
new file mode 100644
index 000000000..be18277b9
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/mobilenet-v2.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import Mobilenet_V2 from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_mobilenet-v2.mdx";
+
+# MobileNetV2
+
+<Mobilenet_V2 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/openpose.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/openpose.md
new file mode 100644
index 000000000..49ef704b0
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/openpose.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 8
+---
+
+import OpenPose from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_openpose.mdx';
+
+# OpenPose
+
+<OpenPose />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/pp-ocr-v4.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/pp-ocr-v4.md
new file mode 100644
index 000000000..c42e903c7
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/pp-ocr-v4.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 11
+---
+
+import PP_OCR_V4 from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_pp-ocr-v4.mdx";
+
+# PP-OCRv4
+
+<PP_OCR_V4 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/real-esrgan.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/real-esrgan.md
new file mode 100644
index 000000000..69262f447
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/real-esrgan.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 15
+---
+
+import Real_ESRGAN from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_real-esrgan.mdx';
+
+# Real-ESRGAN
+
+<Real_ESRGAN />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/resnet50.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/resnet50.md
new file mode 100644
index 000000000..0b8fff1ec
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/resnet50.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 3
+---
+
+import ResNet50 from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_resnet50.mdx';
+
+# ResNet50
+
+<ResNet50 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/scrfd-arcface.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/scrfd-arcface.md
new file mode 100644
index 000000000..fdfbc682e
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/scrfd-arcface.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 10
+---
+
+import SCRFD_ArcFace from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_scrfd-arcface.mdx";
+
+# SCRFD-ArcFace
+
+<SCRFD_ArcFace />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/vdsr.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/vdsr.md
new file mode 100644
index 000000000..6a8321cee
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/vdsr.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 14
+---
+
+import VDSR from '../../../../../common/orion-common/app-dev/artificial-intelligence/\_vdsr.mdx';
+
+# VDSR
+
+<VDSR />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/yolov8n.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/yolov8n.md
new file mode 100644
index 000000000..bb0f16d55
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/yolov8n.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 4
+---
+
+import YOLOv8n from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_yolov8n.mdx";
+
+# YOLOv8n
+
+<YOLOv8n />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/Vision/yolov8s-pose.md b/docs/orion/o6n/app-development/artificial-intelligence/Vision/yolov8s-pose.md
new file mode 100644
index 000000000..3c2ededac
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/Vision/yolov8s-pose.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 9
+---
+
+import YOLOv8s_pose from "../../../../../common/orion-common/app-dev/artificial-intelligence/\_yolov8s-pose.mdx";
+
+# YOLOv8s-pose
+
+<YOLOv8s_pose />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/ai-hub.md b/docs/orion/o6n/app-development/artificial-intelligence/ai-hub.md
index b472ecd89..4adad6eda 100644
--- a/docs/orion/o6n/app-development/artificial-intelligence/ai-hub.md
+++ b/docs/orion/o6n/app-development/artificial-intelligence/ai-hub.md
@@ -4,6 +4,6 @@ sidebar_position: 2
import AI_Hub from '../../../../common/orion-common/app-dev/artificial-intelligence/\_ai-hub.mdx';
-# CIX AI Model HUb
+# AI Model Hub
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/deeplab_v3.md b/docs/orion/o6n/app-development/artificial-intelligence/deeplab_v3.md
deleted file mode 100644
index d327f4de5..000000000
--- a/docs/orion/o6n/app-development/artificial-intelligence/deeplab_v3.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 6
----
-
-import DeepLabv3 from '../../../../common/orion-common/app-dev/artificial-intelligence/\_deeplab_v3.mdx';
-
-# DeepLabv3 full example
-
-<DeepLabv3 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/env-setup.md b/docs/orion/o6n/app-development/artificial-intelligence/env-setup.md
new file mode 100644
index 000000000..37f7d9683
--- /dev/null
+++ b/docs/orion/o6n/app-development/artificial-intelligence/env-setup.md
@@ -0,0 +1,9 @@
+---
+sidebar_position: 1
+---
+
+import Env_Setup from "../../../../common/orion-common/app-dev/artificial-intelligence/\_env-setup.mdx";
+
+# Environment setup
+
+<Env_Setup />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/ernie-4_5-0_3b_llama_cpp.md b/docs/orion/o6n/app-development/artificial-intelligence/ernie-4_5-0_3b_llama_cpp.md
deleted file mode 100644
index 39f5e043d..000000000
--- a/docs/orion/o6n/app-development/artificial-intelligence/ernie-4_5-0_3b_llama_cpp.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 10
----
-
-import ERNIE4503BLLAMACPP from '../../../../common/ai/\_ernie-4_5-0_3b_llama_cpp.mdx';
-
-# ERNIE-4.5-0.3B
-
-<ERNIE4503BLLAMACPP />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/ernie-4_5-21b-a3b_llama_cpp.md b/docs/orion/o6n/app-development/artificial-intelligence/ernie-4_5-21b-a3b_llama_cpp.md
deleted file mode 100644
index 858a5912e..000000000
--- a/docs/orion/o6n/app-development/artificial-intelligence/ernie-4_5-21b-a3b_llama_cpp.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 11
----
-
-import ERNIE4521BA3BLLAMACPP from '../../../../common/ai/\_ernie-4_5-21b-a3b_llama_cpp.mdx';
-
-# ERNIE-4.5-21B-A3B
-
-<ERNIE4521BA3BLLAMACPP />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/llama_cpp.md b/docs/orion/o6n/app-development/artificial-intelligence/llama_cpp.md
index a86eb57ce..3bccc3ba6 100644
--- a/docs/orion/o6n/app-development/artificial-intelligence/llama_cpp.md
+++ b/docs/orion/o6n/app-development/artificial-intelligence/llama_cpp.md
@@ -1,9 +1,9 @@
---
-sidebar_position: 8
+sidebar_position: 7
---
import Llamacpp from '../../../../common/ai/\_llama_cpp.mdx';
-# Llama.cpp
+# llama.cpp
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/npu-introduction.md b/docs/orion/o6n/app-development/artificial-intelligence/npu-introduction.md
deleted file mode 100644
index 9ad41f153..000000000
--- a/docs/orion/o6n/app-development/artificial-intelligence/npu-introduction.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 1
----
-
-import NPU_Installation from '../../../../common/orion-common/app-dev/artificial-intelligence/\_npu-introduction.mdx';
-
-# NPU SDK installation
-
-<NPU_Installation />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/ollama.md b/docs/orion/o6n/app-development/artificial-intelligence/ollama.md
index 78d030253..e3ea417af 100644
--- a/docs/orion/o6n/app-development/artificial-intelligence/ollama.md
+++ b/docs/orion/o6n/app-development/artificial-intelligence/ollama.md
@@ -1,5 +1,5 @@
---
-sidebar_position: 9
+sidebar_position: 8
---
import Ollama from '../../../../common/ai/\_ollama.mdx';
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/openpose.md b/docs/orion/o6n/app-development/artificial-intelligence/openpose.md
deleted file mode 100644
index 6a079a602..000000000
--- a/docs/orion/o6n/app-development/artificial-intelligence/openpose.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 5
----
-
-import OpenPose from '../../../../common/orion-common/app-dev/artificial-intelligence/\_openpose.mdx';
-
-# OpenPose full example
-
-<OpenPose />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/resnet50.md b/docs/orion/o6n/app-development/artificial-intelligence/resnet50.md
deleted file mode 100644
index 570f208a6..000000000
--- a/docs/orion/o6n/app-development/artificial-intelligence/resnet50.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 3
----
-
-import ResNet50 from '../../../../common/orion-common/app-dev/artificial-intelligence/\_resnet50.mdx';
-
-# ResNet50 full example
-
-<ResNet50 />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/vdsr.md b/docs/orion/o6n/app-development/artificial-intelligence/vdsr.md
deleted file mode 100644
index ff4f34ac8..000000000
--- a/docs/orion/o6n/app-development/artificial-intelligence/vdsr.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 7
----
-
-import VDSR from '../../../../common/orion-common/app-dev/artificial-intelligence/\_vdsr.mdx';
-
-# VDSR full example
-
-<VDSR />
diff --git a/docs/orion/o6n/app-development/artificial-intelligence/yolov8.md b/docs/orion/o6n/app-development/artificial-intelligence/yolov8.md
deleted file mode 100644
index a3cc0b7c4..000000000
--- a/docs/orion/o6n/app-development/artificial-intelligence/yolov8.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-sidebar_position: 4
----
-
-import YOLOv8 from '../../../../common/orion-common/app-dev/artificial-intelligence/\_yolov8.mdx';
-
-# YOLOv8 full example
-
-<YOLOv8 />
diff --git a/static/img/orion/o6/ai-models/UFLD-host-out1.webp b/static/img/orion/o6/ai-models/UFLD-host-out1.webp
new file mode 100644
index 000000000..7af0e6dca
Binary files /dev/null and b/static/img/orion/o6/ai-models/UFLD-host-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/UFLD-host-out2.webp b/static/img/orion/o6/ai-models/UFLD-host-out2.webp
new file mode 100644
index 000000000..e218debc9
Binary files /dev/null and b/static/img/orion/o6/ai-models/UFLD-host-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/UFLD-host-out3.webp b/static/img/orion/o6/ai-models/UFLD-host-out3.webp
new file mode 100644
index 000000000..5e95dabaa
Binary files /dev/null and b/static/img/orion/o6/ai-models/UFLD-host-out3.webp differ
diff --git a/static/img/orion/o6/ai-models/UFLD-npu-out1.webp b/static/img/orion/o6/ai-models/UFLD-npu-out1.webp
new file mode 100644
index 000000000..2c0a168da
Binary files /dev/null and b/static/img/orion/o6/ai-models/UFLD-npu-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/UFLD-npu-out2.webp b/static/img/orion/o6/ai-models/UFLD-npu-out2.webp
new file mode 100644
index 000000000..f1709c530
Binary files /dev/null and b/static/img/orion/o6/ai-models/UFLD-npu-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/UFLD-npu-out3.webp b/static/img/orion/o6/ai-models/UFLD-npu-out3.webp
new file mode 100644
index 000000000..519519c67
Binary files /dev/null and b/static/img/orion/o6/ai-models/UFLD-npu-out3.webp differ
diff --git a/static/img/orion/o6/ai-models/clip-test-bird.webp b/static/img/orion/o6/ai-models/clip-test-bird.webp
new file mode 100644
index 000000000..4dffe3245
Binary files /dev/null and b/static/img/orion/o6/ai-models/clip-test-bird.webp differ
diff --git a/static/img/orion/o6/ai-models/clip-test-dog.webp b/static/img/orion/o6/ai-models/clip-test-dog.webp
new file mode 100644
index 000000000..e603479f7
Binary files /dev/null and b/static/img/orion/o6/ai-models/clip-test-dog.webp differ
diff --git a/static/img/orion/o6/ai-models/clip-test-person.webp b/static/img/orion/o6/ai-models/clip-test-person.webp
new file mode 100644
index 000000000..6f4d0556c
Binary files /dev/null and b/static/img/orion/o6/ai-models/clip-test-person.webp differ
diff --git a/static/img/orion/o6/deeplab1.webp b/static/img/orion/o6/ai-models/deeplab1.webp
similarity index 100%
rename from static/img/orion/o6/deeplab1.webp
rename to static/img/orion/o6/ai-models/deeplab1.webp
diff --git a/static/img/orion/o6/deeplab2.webp b/static/img/orion/o6/ai-models/deeplab2.webp
similarity index 100%
rename from static/img/orion/o6/deeplab2.webp
rename to static/img/orion/o6/ai-models/deeplab2.webp
diff --git a/static/img/orion/o6/ai-models/fast-scnn-host-out.webp b/static/img/orion/o6/ai-models/fast-scnn-host-out.webp
new file mode 100644
index 000000000..f5669471e
Binary files /dev/null and b/static/img/orion/o6/ai-models/fast-scnn-host-out.webp differ
diff --git a/static/img/orion/o6/ai-models/fast-scnn-npu-out.webp b/static/img/orion/o6/ai-models/fast-scnn-npu-out.webp
new file mode 100644
index 000000000..c6de8ade0
Binary files /dev/null and b/static/img/orion/o6/ai-models/fast-scnn-npu-out.webp differ
diff --git a/static/img/orion/o6/ai-models/midas-v2-host-out1.webp b/static/img/orion/o6/ai-models/midas-v2-host-out1.webp
new file mode 100644
index 000000000..4633b00d8
Binary files /dev/null and b/static/img/orion/o6/ai-models/midas-v2-host-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/midas-v2-host-out2.webp b/static/img/orion/o6/ai-models/midas-v2-host-out2.webp
new file mode 100644
index 000000000..bdd0f5b8c
Binary files /dev/null and b/static/img/orion/o6/ai-models/midas-v2-host-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/midas-v2-host-out3.webp b/static/img/orion/o6/ai-models/midas-v2-host-out3.webp
new file mode 100644
index 000000000..dd2cd5a9d
Binary files /dev/null and b/static/img/orion/o6/ai-models/midas-v2-host-out3.webp differ
diff --git a/static/img/orion/o6/ai-models/midas-v2-npu-out1.webp b/static/img/orion/o6/ai-models/midas-v2-npu-out1.webp
new file mode 100644
index 000000000..330b1ff4d
Binary files /dev/null and b/static/img/orion/o6/ai-models/midas-v2-npu-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/midas-v2-npu-out2.webp b/static/img/orion/o6/ai-models/midas-v2-npu-out2.webp
new file mode 100644
index 000000000..9bef143e4
Binary files /dev/null and b/static/img/orion/o6/ai-models/midas-v2-npu-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/midas-v2-npu-out3.webp b/static/img/orion/o6/ai-models/midas-v2-npu-out3.webp
new file mode 100644
index 000000000..9e46314e0
Binary files /dev/null and b/static/img/orion/o6/ai-models/midas-v2-npu-out3.webp differ
diff --git a/static/img/orion/o6/openpose_npu1.webp b/static/img/orion/o6/ai-models/openpose_npu1.webp
similarity index 100%
rename from static/img/orion/o6/openpose_npu1.webp
rename to static/img/orion/o6/ai-models/openpose_npu1.webp
diff --git a/static/img/orion/o6/openpose_npu2.webp b/static/img/orion/o6/ai-models/openpose_npu2.webp
similarity index 100%
rename from static/img/orion/o6/openpose_npu2.webp
rename to static/img/orion/o6/ai-models/openpose_npu2.webp
diff --git a/static/img/orion/o6/openpose_onnx1.webp b/static/img/orion/o6/ai-models/openpose_onnx1.webp
similarity index 100%
rename from static/img/orion/o6/openpose_onnx1.webp
rename to static/img/orion/o6/ai-models/openpose_onnx1.webp
diff --git a/static/img/orion/o6/openpose_onnx2.webp b/static/img/orion/o6/ai-models/openpose_onnx2.webp
similarity index 100%
rename from static/img/orion/o6/openpose_onnx2.webp
rename to static/img/orion/o6/ai-models/openpose_onnx2.webp
diff --git a/static/img/orion/o6/ai-models/pp-ocr-v4-host-out1.webp b/static/img/orion/o6/ai-models/pp-ocr-v4-host-out1.webp
new file mode 100644
index 000000000..1bd4bcc48
Binary files /dev/null and b/static/img/orion/o6/ai-models/pp-ocr-v4-host-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/pp-ocr-v4-host-out2.webp b/static/img/orion/o6/ai-models/pp-ocr-v4-host-out2.webp
new file mode 100644
index 000000000..45ce88819
Binary files /dev/null and b/static/img/orion/o6/ai-models/pp-ocr-v4-host-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/pp-ocr-v4-host-out3.webp b/static/img/orion/o6/ai-models/pp-ocr-v4-host-out3.webp
new file mode 100644
index 000000000..000e2c9d4
Binary files /dev/null and b/static/img/orion/o6/ai-models/pp-ocr-v4-host-out3.webp differ
diff --git a/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out1.webp b/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out1.webp
new file mode 100644
index 000000000..e7651176d
Binary files /dev/null and b/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out2.webp b/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out2.webp
new file mode 100644
index 000000000..da8e6b9e1
Binary files /dev/null and b/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out3.webp b/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out3.webp
new file mode 100644
index 000000000..36f3841f7
Binary files /dev/null and b/static/img/orion/o6/ai-models/pp-ocr-v4-npu-out3.webp differ
diff --git a/static/img/orion/o6/ai-models/real-esrgan-host-out.webp b/static/img/orion/o6/ai-models/real-esrgan-host-out.webp
new file mode 100644
index 000000000..1ac108b5b
Binary files /dev/null and b/static/img/orion/o6/ai-models/real-esrgan-host-out.webp differ
diff --git a/static/img/orion/o6/ai-models/real-esrgan-npu-out.webp b/static/img/orion/o6/ai-models/real-esrgan-npu-out.webp
new file mode 100644
index 000000000..03f2289e6
Binary files /dev/null and b/static/img/orion/o6/ai-models/real-esrgan-npu-out.webp differ
diff --git a/static/img/orion/o6/ai-models/scrfd-arcface-host-out1.webp b/static/img/orion/o6/ai-models/scrfd-arcface-host-out1.webp
new file mode 100644
index 000000000..94dc03a57
Binary files /dev/null and b/static/img/orion/o6/ai-models/scrfd-arcface-host-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/scrfd-arcface-host-out2.webp b/static/img/orion/o6/ai-models/scrfd-arcface-host-out2.webp
new file mode 100644
index 000000000..824a58aba
Binary files /dev/null and b/static/img/orion/o6/ai-models/scrfd-arcface-host-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/scrfd-arcface-npu-out1.webp b/static/img/orion/o6/ai-models/scrfd-arcface-npu-out1.webp
new file mode 100644
index 000000000..5c602fde6
Binary files /dev/null and b/static/img/orion/o6/ai-models/scrfd-arcface-npu-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/scrfd-arcface-npu-out2.webp b/static/img/orion/o6/ai-models/scrfd-arcface-npu-out2.webp
new file mode 100644
index 000000000..0268482cb
Binary files /dev/null and b/static/img/orion/o6/ai-models/scrfd-arcface-npu-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/sd-host.webp b/static/img/orion/o6/ai-models/sd-host.webp
new file mode 100644
index 000000000..a94d6a7e8
Binary files /dev/null and b/static/img/orion/o6/ai-models/sd-host.webp differ
diff --git a/static/img/orion/o6/ai-models/sd-npu.webp b/static/img/orion/o6/ai-models/sd-npu.webp
new file mode 100644
index 000000000..e2eadb86e
Binary files /dev/null and b/static/img/orion/o6/ai-models/sd-npu.webp differ
diff --git a/static/img/orion/o6/vdsr_npu.webp b/static/img/orion/o6/ai-models/vdsr_npu.webp
similarity index 100%
rename from static/img/orion/o6/vdsr_npu.webp
rename to static/img/orion/o6/ai-models/vdsr_npu.webp
diff --git a/static/img/orion/o6/vdsr_onnx.webp b/static/img/orion/o6/ai-models/vdsr_onnx.webp
similarity index 100%
rename from static/img/orion/o6/vdsr_onnx.webp
rename to static/img/orion/o6/ai-models/vdsr_onnx.webp
diff --git a/static/img/orion/o6/ai-models/yolov8n-host-out1.webp b/static/img/orion/o6/ai-models/yolov8n-host-out1.webp
new file mode 100644
index 000000000..9f8ea90d1
Binary files /dev/null and b/static/img/orion/o6/ai-models/yolov8n-host-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/yolov8n-host-out2.webp b/static/img/orion/o6/ai-models/yolov8n-host-out2.webp
new file mode 100644
index 000000000..63fcb4428
Binary files /dev/null and b/static/img/orion/o6/ai-models/yolov8n-host-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/yolov8n-npu-out1.webp b/static/img/orion/o6/ai-models/yolov8n-npu-out1.webp
new file mode 100644
index 000000000..4be25e483
Binary files /dev/null and b/static/img/orion/o6/ai-models/yolov8n-npu-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/yolov8n-npu-out2.webp b/static/img/orion/o6/ai-models/yolov8n-npu-out2.webp
new file mode 100644
index 000000000..3a1816682
Binary files /dev/null and b/static/img/orion/o6/ai-models/yolov8n-npu-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/yolov8s-pose-host-out1.webp b/static/img/orion/o6/ai-models/yolov8s-pose-host-out1.webp
new file mode 100644
index 000000000..3437a5838
Binary files /dev/null and b/static/img/orion/o6/ai-models/yolov8s-pose-host-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/yolov8s-pose-host-out2.webp b/static/img/orion/o6/ai-models/yolov8s-pose-host-out2.webp
new file mode 100644
index 000000000..b4d70d167
Binary files /dev/null and b/static/img/orion/o6/ai-models/yolov8s-pose-host-out2.webp differ
diff --git a/static/img/orion/o6/ai-models/yolov8s-pose-npu-out1.webp b/static/img/orion/o6/ai-models/yolov8s-pose-npu-out1.webp
new file mode 100644
index 000000000..ab6ae7f72
Binary files /dev/null and b/static/img/orion/o6/ai-models/yolov8s-pose-npu-out1.webp differ
diff --git a/static/img/orion/o6/ai-models/yolov8s-pose-npu-out2.webp b/static/img/orion/o6/ai-models/yolov8s-pose-npu-out2.webp
new file mode 100644
index 000000000..d0da5b330
Binary files /dev/null and b/static/img/orion/o6/ai-models/yolov8s-pose-npu-out2.webp differ
diff --git a/static/img/orion/o6/yolov8_npu1.webp b/static/img/orion/o6/yolov8_npu1.webp
deleted file mode 100644
index 4f5aa5265..000000000
Binary files a/static/img/orion/o6/yolov8_npu1.webp and /dev/null differ
diff --git a/static/img/orion/o6/yolov8_npu2.webp b/static/img/orion/o6/yolov8_npu2.webp
deleted file mode 100644
index 1db5b1927..000000000
Binary files a/static/img/orion/o6/yolov8_npu2.webp and /dev/null differ
diff --git a/static/img/orion/o6/yolov8_onnx1.webp b/static/img/orion/o6/yolov8_onnx1.webp
deleted file mode 100644
index 55b33feee..000000000
Binary files a/static/img/orion/o6/yolov8_onnx1.webp and /dev/null differ
diff --git a/static/img/orion/o6/yolov8_onnx2.webp b/static/img/orion/o6/yolov8_onnx2.webp
deleted file mode 100644
index 254779a9c..000000000
Binary files a/static/img/orion/o6/yolov8_onnx2.webp and /dev/null differ