# XPU - Intel® GPUs

## Validated Hardware

| Hardware |
| ----------------------------------------- |
| [Intel® Arc™ Pro B-Series Graphics](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/workstations/b-series/overview.html) |

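
Before serving any of the models below, it can help to confirm that the Arc Pro B-Series card is actually exposed as an XPU device. The sketch below is a generic check using PyTorch's `torch.xpu` API; it assumes a PyTorch build with XPU support and current Intel GPU drivers, and is not tied to any particular serving stack.

```python
# Minimal sketch: confirm the Intel GPU is visible as an XPU device.
# Assumes a PyTorch build with XPU support (torch >= 2.4) and the Intel GPU
# driver / oneAPI runtime installed; adjust for your environment.
import torch

if torch.xpu.is_available():
    for idx in range(torch.xpu.device_count()):
        print(f"XPU {idx}: {torch.xpu.get_device_name(idx)}")
else:
    print("No XPU device detected - check the GPU driver and oneAPI runtime.")
```
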
## Supported Models

### Text-only Language Models

| Model | Architecture | FP16 | Dynamic FP8 | MXFP4 |
| ----------------------------------------- | ---------------------------------------------------- | ---- | ----------- | ----- |
| openai/gpt-oss-20b | GPTForCausalLM | | | ✅ |
| openai/gpt-oss-120b | GPTForCausalLM | | | ✅ |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | LlamaForCausalLM | ✅ | ✅ | |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | QwenForCausalLM | ✅ | ✅ | |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | QwenForCausalLM | ✅ | ✅ | |
| deepseek-ai/DeepSeek-R1-Distill-Llama-70B | LlamaForCausalLM | ✅ | ✅ | |
| Qwen/Qwen2.5-72B-Instruct | Qwen2ForCausalLM | ✅ | ✅ | |
| Qwen/Qwen3-14B | Qwen3ForCausalLM | ✅ | ✅ | |
| Qwen/Qwen3-32B | Qwen3ForCausalLM | ✅ | ✅ | |
| Qwen/Qwen3-30B-A3B | Qwen3ForCausalLM | ✅ | ✅ | |
| Qwen/Qwen3-30B-A3B-GPTQ-Int4 | Qwen3ForCausalLM | ✅ | ✅ | |
| Qwen/Qwen3-Coder-30B-A3B-Instruct | Qwen3ForCausalLM | ✅ | ✅ | |
| Qwen/QwQ-32B | QwenForCausalLM | ✅ | ✅ | |
| deepseek-ai/DeepSeek-V2-Lite | DeepSeekForCausalLM | ✅ | ✅ | |
| meta-llama/Llama-3.1-8B-Instruct | LlamaForCausalLM | ✅ | ✅ | |
| baichuan-inc/Baichuan2-13B-Chat | BaichuanForCausalLM | ✅ | ✅ | |
| THUDM/GLM-4-9B-chat | GLMForCausalLM | ✅ | ✅ | |
| THUDM/CodeGeex4-All-9B | CodeGeexForCausalLM | ✅ | ✅ | |
| chuhac/TeleChat2-35B | LlamaForCausalLM (TeleChat2 based on Llama arch) | ✅ | ✅ | |
| 01-ai/Yi1.5-34B-Chat | YiForCausalLM | ✅ | ✅ | |
| deepseek-ai/DeepSeek-Coder-33B-base | DeepSeekCoderForCausalLM | ✅ | ✅ | |
| meta-llama/Llama-2-13b-chat-hf | LlamaForCausalLM | ✅ | ✅ | |
| Qwen/Qwen1.5-14B-Chat | QwenForCausalLM | ✅ | ✅ | |
| Qwen/Qwen1.5-32B-Chat | QwenForCausalLM | ✅ | ✅ | |

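
A support matrix like the one above is typically exercised through an inference engine such as vLLM. Assuming a vLLM build with this XPU backend installed (an assumption, not stated on this page), a minimal FP16 offline-generation sketch could look like this:

```python
# Minimal sketch (assumption: a vLLM build with XPU support is installed).
# Loads one FP16-validated model from the table above and runs a single prompt.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # any FP16 ✅ entry above
    dtype="float16",
)
params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["Summarize what an Intel XPU is in one sentence."], params)
print(outputs[0].outputs[0].text)
```

Dynamic FP8 and MXFP4 runs would add the corresponding quantization settings of whichever engine is used; those options are engine-specific and not shown here.
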
### Multimodal Language Models

| Model | Architecture | FP16 | Dynamic FP8 | MXFP4 |
| ---------------------------- | -------------------------------- | ---- | ----------- | ----- |
| OpenGVLab/InternVL3_5-8B | InternVLForConditionalGeneration | ✅ | ✅ | |
| OpenGVLab/InternVL3_5-14B | InternVLForConditionalGeneration | ✅ | ✅ | |
| OpenGVLab/InternVL3_5-38B | InternVLForConditionalGeneration | ✅ | ✅ | |
| Qwen/Qwen2-VL-7B-Instruct | Qwen2VLForConditionalGeneration | ✅ | ✅ | |
| Qwen/Qwen2.5-VL-72B-Instruct | Qwen2VLForConditionalGeneration | ✅ | ✅ | |
| Qwen/Qwen2.5-VL-32B-Instruct | Qwen2VLForConditionalGeneration | ✅ | ✅ | |
| THUDM/GLM-4v-9B | GLM4vForConditionalGeneration | ✅ | ✅ | |
| openbmb/MiniCPM-V-4 | MiniCPMVForConditionalGeneration | ✅ | ✅ | |

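
For the multimodal entries, a common pattern is to send requests to an OpenAI-compatible endpoint exposed by the serving engine. The sketch below assumes such a server is already running locally on port 8000 with one of the models above loaded; the URL, port, and image link are placeholders, not values taken from this page.

```python
# Minimal sketch (assumption: an OpenAI-compatible server is running at
# http://localhost:8000/v1 with a vision-language model from the table loaded).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Qwen/Qwen2-VL-7B-Instruct",  # any multimodal entry above
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```
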
### Embedding and Reranker Language Models

| Model | Architecture | FP16 | Dynamic FP8 | MXFP4 |
| ----------------------- | ------------------------------ | ---- | ----------- | ----- |
| Qwen/Qwen3-Embedding-8B | Qwen3ForTextEmbedding | ✅ | ✅ | |
| Qwen/Qwen3-Reranker-8B | Qwen3ForSequenceClassification | ✅ | ✅ | |

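
The embedding model can likewise be queried through an OpenAI-compatible embeddings endpoint, assuming the serving engine exposes one; the reranker usually goes through an engine-specific scoring route, which is omitted here.

```python
# Minimal sketch (assumption: an OpenAI-compatible embeddings endpoint is
# served at http://localhost:8000/v1 with the embedding model loaded).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
result = client.embeddings.create(
    model="Qwen/Qwen3-Embedding-8B",
    input=["Intel Arc Pro B-Series", "XPU backend support matrix"],
)
print(len(result.data), "embeddings;", len(result.data[0].embedding), "dimensions each")
```
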
✅ Runs and is optimized.
🟨 Runs correctly but is not yet optimized to the ✅ level.
❌ Fails the accuracy test or does not run.