Please enhance the model information to give local LLM users a better overview. The table below illustrates the kind of values this could include (I don't think the numbers are accurate):

| Model | Type | Context length | VRAM (FP16) | VRAM (FP32) | Download |
|---|---|---|---|---|---|
| Qwen3-Coder-Next | instruct | 256k | ~144 GB | ~288 GB | 🤗 [Hugging Face](https://huggingface.co/Qwen/Qwen3-Coder-Next) • 🤖 [ModelScope](https://modelscope.cn/models/Qwen/Qwen3-Coder-Next) |
| Qwen3-Coder-Next-Base | base | 256k | ~144 GB | ~288 GB | 🤗 [Hugging Face](https://huggingface.co/Qwen/Qwen3-Coder-Next-Base) • 🤖 [ModelScope](https://modelscope.cn/models/Qwen/Qwen3-Coder-Next-Base) |
| Qwen3-Coder-480B-A35B-Instruct | instruct | 256k | ~70 GB | ~140 GB | 🤗 [Hugging Face](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) • 🤖 [ModelScope](https://modelscope.cn/models/Qwen/Qwen3-Coder-480B-A35B-Instruct) |
| Qwen3-Coder-30B-A3B-Instruct | instruct | 256k | ~6 GB | ~12 GB | 🤗 [Hugging Face](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) • 🤖 [ModelScope](https://modelscope.cn/models/Qwen/Qwen3-Coder-30B-A3B-Instruct) |
| Qwen3-Coder-Next-FP8 | instruct | 256k | ~144 GB | ~288 GB | 🤗 [Hugging Face](https://huggingface.co/Qwen/Qwen3-Coder-Next-FP8) • 🤖 [ModelScope](https://modelscope.cn/models/Qwen/Qwen3-Coder-Next-FP8) |
| Qwen3-Coder-Next-GGUF | instruct | 256k | ~144 GB | ~288 GB | 🤗 [Hugging Face](https://huggingface.co/Qwen/Qwen3-Coder-Next-GGUF) • 🤖 [ModelScope](https://modelscope.cn/models/Qwen/Qwen3-Coder-Next-GGUF) |
| Qwen3-Coder-480B-A35B-Instruct-FP8 | instruct | 256k | ~70 GB | ~140 GB | 🤗 [Hugging Face](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8) • 🤖 [ModelScope](https://modelscope.cn/models/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8) |
| Qwen3-Coder-30B-A3B-Instruct-FP8 | instruct | 256k | ~6 GB | ~12 GB | 🤗 [Hugging Face](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8) • 🤖 [ModelScope](https://modelscope.cn/models/Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8) |
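As a starting point for filling in the VRAM columns with defensible numbers, here is a minimal sketch of a weights-only estimate from total parameter count and data type. Everything in it is my own assumption, not an official formula: the `estimate_vram_gb` helper and `BYTES_PER_PARAM` table are hypothetical, and the estimate deliberately ignores KV cache, context length, and runtime overhead, which add on top of this.

```python
# Minimal sketch (assumption: weights-only estimate, no KV cache or
# runtime overhead; byte widths are the standard sizes per dtype).
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def estimate_vram_gb(total_params_billions: float, dtype: str) -> float:
    """Approximate GB of VRAM needed just to hold the model weights."""
    total_bytes = total_params_billions * 1e9 * BYTES_PER_PARAM[dtype]
    return total_bytes / 1024**3

# Example: a 30B-total-parameter model at different precisions.
for dtype in ("fp32", "fp16", "fp8"):
    print(f"30B @ {dtype}: ~{estimate_vram_gb(30, dtype):.0f} GB")
```

One caveat worth surfacing in the overview: for MoE models such as the 30B-A3B and 480B-A35B variants, the full set of expert weights generally still has to reside in memory even though only a few billion parameters are active per token, so estimates based on active parameters alone (as some of the example values above appear to be) can significantly undercount.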