feat(multimodal): add Kimi-K2.5 vision support for gRPC router #1026
Kangyan-Zhou wants to merge 23 commits into lightseekorg:main
Conversation
Add ModelProcessorSpec and ImagePreProcessor for moonshotai/Kimi-K2.5 so the gRPC PD router can handle multimodal (image) requests.

- KimiK25VisionSpec: matches "kimi" + "k2" model IDs, uses <|media_pad|> placeholder (media_placeholder_token_id from config), NaViT-style field layouts identical to the Qwen-VL family
- KimiK25Processor: wraps QwenVLProcessorBase with Kimi-specific defaults (patch_size=14, merge_size=2, normalization=[0.5, 0.5, 0.5], max_pixels=3,211,264 from in_patch_limit=16384)
- Fix get_zmq_socket import path for sglang main compat

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
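The max_pixels figure follows directly from the patch limit: 16384 patches of 14x14 pixels each. A minimal sketch of that derivation (function and constant names are assumed, not the router's actual code):

```rust
// Sketch only: derive Kimi-K2.5's pixel budget from its patch limit.
// Names (max_pixels, IN_PATCH_LIMIT) are illustrative assumptions.
const PATCH_SIZE: usize = 14;
const IN_PATCH_LIMIT: usize = 16384;

fn max_pixels(patch_limit: usize, patch_size: usize) -> usize {
    // Each ViT patch covers patch_size x patch_size pixels.
    patch_limit * patch_size * patch_size
}

fn main() {
    let budget = max_pixels(IN_PATCH_LIMIT, PATCH_SIZE);
    // 16384 * 14 * 14 = 3,211,264, the max_pixels default above.
    assert_eq!(budget, 3_211_264);
    println!("max_pixels = {budget}");
}
```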
…locally

When the tokenizer source is a HuggingFace model ID (e.g., "moonshotai/Kimi-K2.5") rather than a local directory, the gRPC router cannot read config.json and preprocessor_config.json from disk, so multimodal requests fail with "Failed to read config.json".

Make get_or_load_config async and fall back to downloading the two config files from HF Hub via the new download_files_from_hf helper when the local path doesn't exist.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Address review findings:

- Log errors from HF Hub downloads instead of silently swallowing them
- Add an explicit error when the local model directory exists but config.json is missing (prevents a misleading fallback to HF Hub)
- Upgrade the fallback log from debug to warn for production visibility

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
TiktokenTokenizer::encode() was using encode_ordinary(), which ignores special tokens in the input text. This caused chat-template-rendered special tokens like <|media_pad|> to be split into BPE sub-tokens instead of being recognized as single token IDs.

Switch to encode_with_special_tokens() unconditionally, matching HuggingFace tokenizer behavior where added special tokens are always recognized in input text.

This fixes Kimi-K2.5 multimodal, where the chat template inserts <|media_pad|> (ID 163605) but the tokenizer was producing sub-tokens that expand_tokens couldn't find.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
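The failure mode can be illustrated with a toy encoder: special-token recognition means scanning for the literal placeholder string first and emitting its single ID, instead of letting BPE split it. Everything here (the stand-in per-byte fallback, the constant names) is illustrative; the real fix simply calls tiktoken's encode_with_special_tokens():

```rust
// Illustrative sketch only: why special-token-aware encoding matters.
// The byte-level fallback stands in for real BPE; the real router uses
// the tiktoken encoder's encode_with_special_tokens().
const SPECIAL: (&str, u32) = ("<|media_pad|>", 163605);

// Emit the special token as one ID; encode everything else per byte.
fn encode_with_special(text: &str) -> Vec<u32> {
    let mut ids = Vec::new();
    let mut rest = text;
    while let Some(pos) = rest.find(SPECIAL.0) {
        ids.extend(rest[..pos].bytes().map(u32::from));
        ids.push(SPECIAL.1); // single ID, findable by expand_tokens
        rest = &rest[pos + SPECIAL.0.len()..];
    }
    ids.extend(rest.bytes().map(u32::from));
    ids
}

fn main() {
    let ids = encode_with_special("hi <|media_pad|>!");
    // "hi " (3 byte IDs) + 1 placeholder ID + "!" (1 byte ID) = 5 IDs.
    assert!(ids.contains(&163605));
    assert_eq!(ids.len(), 5);
}
```

An encode_ordinary-style pass would instead emit several sub-token IDs for the placeholder text, none equal to 163605, which is exactly why expand_tokens could not find it.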
Kimi-K2.5 engine accesses `item.grid_thws` (plural) on MultimodalDataItem, but the gateway was sending `image_grid_thw` (Qwen-VL convention). Rename the key in the processor output and update field_layouts/keep_on_cpu_keys to match. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove QwenVLProcessorBase dependency. Kimi-K2.5's MoonViT expects pixel_values as [N, C, patch_size, patch_size] (4D), not flattened [N, C*T*patch_size*patch_size] (2D) like Qwen-VL. The model's PatchEmbed3d applies Conv2d on each patch directly. Implement smart_resize and extract_patches independently, producing [total_patches, 3*14*14] = [N, 588] patches that the engine reconstructs as [N, 3, 14, 14] for Conv2d input. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
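The patch extraction described above can be sketched as follows. The layout and toy sizes follow the commit message ([total_patches, 3*14*14] rows); argument names and the planar [C, H, W] input convention are assumptions for illustration:

```rust
// Sketch of extract_patches: planar [C, H, W] f32 image in, flattened
// [grid_h * grid_w, C * P * P] patch rows out. Each patch row is built
// with row-wise copies; the engine later reshapes rows to [3, 14, 14]
// for its PatchEmbed3d Conv2d.
fn extract_patches(img: &[f32], c: usize, h: usize, w: usize, p: usize) -> Vec<f32> {
    let (grid_h, grid_w) = (h / p, w / p);
    let mut patches = Vec::with_capacity(grid_h * grid_w * c * p * p);
    for gy in 0..grid_h {
        for gx in 0..grid_w {
            for ch in 0..c {
                for row in 0..p {
                    let y = gy * p + row;
                    let start = ch * h * w + y * w + gx * p;
                    // Copy one p-pixel patch row at a time (memcpy-friendly).
                    patches.extend_from_slice(&img[start..start + p]);
                }
            }
        }
    }
    patches
}

fn main() {
    let (c, h, w, p) = (3, 28, 28, 14);
    let img: Vec<f32> = (0..c * h * w).map(|i| i as f32).collect();
    let patches = extract_patches(&img, c, h, w, p);
    // A 2x2 patch grid, each patch 3*14*14 = 588 features: the [N, 588] layout.
    assert_eq!(patches.len(), 4 * 588);
}
```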
The engine's PatchEmbed3d Conv2d expects 4D input [N, C, H, W] but the gateway was serializing pixel_values as 2D [N, C*patch_size*patch_size]. Store as ndarray::Array4 so the proto shape field is [N, 3, 14, 14], which the engine reconstructs correctly for Conv2d. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The previous smart_resize (from Qwen-VL) resized images directly to factor-aligned dimensions, stretching the content. The HF Kimi preprocessor instead:

1. Computes a scale capped at 1.0 (never upscales)
2. Resizes with BICUBIC interpolation
3. Zero-pads to factor-aligned dimensions

This mismatch caused degraded image quality: the model was trained with zero-padded images, not stretched ones. Rewrite to match the HF navit_resize_image + resize_image pipeline exactly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
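The dimension arithmetic behind that pipeline can be sketched as below. The factor (patch_size * merge_size = 28), the cap at 1.0, and the round-up-then-pad behavior come from the commit message; the exact scale formula and function name are assumptions, not the HF implementation:

```rust
// Sketch (assumed formula): scale capped at 1.0 so images are never
// upscaled; the canvas is rounded UP to the next multiple of `factor`
// and the remainder is zero-padded rather than stretched.
// Returns ((resized_h, resized_w), (canvas_h, canvas_w)).
fn resize_and_pad_dims(h: u32, w: u32, factor: u32, max_pixels: u32) -> ((u32, u32), (u32, u32)) {
    let scale = ((max_pixels as f64) / (h as f64 * w as f64)).sqrt().min(1.0);
    let rh = ((h as f64) * scale).round().max(1.0) as u32;
    let rw = ((w as f64) * scale).round().max(1.0) as u32;
    // Pad, don't stretch: round each dimension up to a multiple of factor.
    let ch = rh.div_ceil(factor) * factor;
    let cw = rw.div_ceil(factor) * factor;
    ((rh, rw), (ch, cw))
}

fn main() {
    // Small image: scale is capped at 1.0 (no upscaling), only padding applies.
    let ((rh, rw), (ch, cw)) = resize_and_pad_dims(100, 50, 28, 3_211_264);
    assert_eq!((rh, rw), (100, 50));
    assert_eq!((ch, cw), (112, 56));
}
```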
download_files_from_hf was silently failing in production (likely hf-hub crate issue). Switch to download_tokenizer_from_hf which already works for tokenizer loading and returns the HF cache directory containing config.json and preprocessor_config.json. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
download_tokenizer_from_hf only downloads tokenizer files (filtered by is_tokenizer_file), not config.json or preprocessor_config.json. Add a dedicated download_model_configs_from_hf that fetches these two files on the first multimodal request. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add detailed logging at key points:

- Image dimensions, color type, and raw bytes size after fetch
- Pixel values shape, token counts, first/last pixels, min/max
- Serialized pixel_values bytes and shape
- Token expansion details (search_token_id, placeholders, offsets)

Also use download_model_configs_from_hf and remove dead code.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
KimiK25Processor::preprocess() was reading mean/std from PreProcessorConfig, which falls back to CLIP values when the config can't be parsed (Kimi's preprocessor_config.json nests values under media_proc_cfg). This caused images to be normalized with CLIP mean=[0.48, 0.46, 0.41] std=[0.27, 0.26, 0.28] instead of Kimi's mean=[0.5, 0.5, 0.5] std=[0.5, 0.5, 0.5], producing wrong pixel values that made the model misinterpret images entirely.

Use self.default_mean()/default_std(), which are hardcoded to the correct Kimi values.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Instead of hardcoding normalization values, parse the nested media_proc_cfg structure in Kimi's preprocessor_config.json to extract image_mean, image_std, patch_size, and merge_kernel_size. This ensures the correct values are used regardless of how the config is structured. The previous fix hardcoded [0.5,0.5,0.5] in the processor, which worked but would break if values changed. Now from_json() checks for media_proc_cfg when top-level fields are missing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
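The resulting lookup order (top-level field, then nested media_proc_cfg, then built-in default) can be sketched as a simple Option chain. The struct and field names here are illustrative stand-ins for the parsed config, not the crate's actual types:

```rust
// Sketch of the fallback order described above, with assumed names:
// prefer a top-level value, then the value parsed out of the nested
// media_proc_cfg object, then the Kimi default.
#[derive(Default)]
struct RawConfig {
    image_mean: Option<[f64; 3]>,            // top-level field
    media_proc_image_mean: Option<[f64; 3]>, // extracted from media_proc_cfg
}

const KIMI_DEFAULT_MEAN: [f64; 3] = [0.5, 0.5, 0.5];

fn effective_mean(cfg: &RawConfig) -> [f64; 3] {
    cfg.image_mean
        .or(cfg.media_proc_image_mean)
        .unwrap_or(KIMI_DEFAULT_MEAN)
}

fn main() {
    // Kimi ships only the nested value; without this fallback the loader
    // silently dropped to CLIP's mean and mis-normalized every image.
    let cfg = RawConfig { media_proc_image_mean: Some([0.5; 3]), ..Default::default() };
    assert_eq!(effective_mean(&cfg), [0.5, 0.5, 0.5]);
    assert_eq!(effective_mean(&RawConfig::default()), KIMI_DEFAULT_MEAN);
}
```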
Revert info-level diagnostic logging back to debug level now that the normalization root cause has been identified and fixed. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Make from_value consistent with from_json by delegating to it, ensuring nested media_proc_cfg extraction applies to both paths
- Add a test for encode_with_special_tokens verifying that special token strings in input produce single token IDs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…tion

Two optimizations for the Kimi-K2.5 image preprocessing pipeline:

1. Fuse resize + pad + normalize into a single pass using deinterleave_rgb_to_planes with precomputed scale/bias. Eliminates 2 intermediate Array3 allocations and 2 extra passes over the pixel data.
2. Replace per-element scalar indexing in extract_patches with row-based extend_from_slice (a 14-element memcpy per row), enabling compiler auto-vectorization.

Also take upstream multimodal.rs, which has resolve_model_config_dir and the updated image_processor_registry.find() API.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
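The fused pass rewrites (px/255 - mean)/std as px * scale + bias with scale = 1/(255*std) and bias = -mean/std precomputed per channel, so each pixel costs one multiply-add. A minimal sketch (interleaved RGB output for brevity; the real code writes planar output via deinterleave_rgb_to_planes):

```rust
// Sketch of the fused normalization: one multiply-add per pixel instead
// of separate /255, -mean, /std passes and intermediate buffers.
fn fused_normalize(rgb: &[u8], mean: [f32; 3], std: [f32; 3]) -> Vec<f32> {
    // Precompute per-channel scale/bias once, outside the pixel loop.
    let scale: [f32; 3] = std::array::from_fn(|c| 1.0 / (255.0 * std[c]));
    let bias: [f32; 3] = std::array::from_fn(|c| -mean[c] / std[c]);
    rgb.chunks_exact(3)
        .flat_map(|px| (0..3).map(move |c| px[c] as f32 * scale[c] + bias[c]))
        .collect()
}

fn main() {
    // With Kimi's mean/std of 0.5, a 255 pixel maps to 1.0 and 0 maps to -1.0.
    let out = fused_normalize(&[255, 0, 128], [0.5; 3], [0.5; 3]);
    assert!((out[0] - 1.0).abs() < 1e-6);
    assert!((out[1] + 1.0).abs() < 1e-6);
}
```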
Log timing breakdown: image fetch, config load, preprocessing, token expansion, and assembly/serialization. This helps identify which step dominates TTFT for multimodal gRPC requests. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…inputs

Two optimizations to reduce gRPC multimodal TTFT:

1. Move image preprocessing (resize + pad + normalize + patchify) to tokio::task::spawn_blocking so CPU-intensive work doesn't block the async runtime. Under 200 concurrent requests, this prevents serialized preprocessing from inflating tail latencies.
2. Strip mm_inputs from decode worker requests in PD dual dispatch. The decode worker only needs the KV cache from prefill; sending ~40MB of pixel tensors to it was pure waste.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace image::resize_exact(CatmullRom) with transforms::resize() which uses fast_image_resize (AVX2/SSE4 SIMD). This is a drop-in replacement that gives 3-5x faster BICUBIC resize — the dominant CPU cost in the preprocessing pipeline. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove info-level timing logs (fetch_ms, config_ms, preprocess_ms, expand_ms, serialize_ms) now that performance analysis is complete. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Code Review
This pull request introduces support for the Kimi-K2.5 (MoonViT) model, implementing a specialized image processor that handles its specific resizing and zero-padding requirements. Key changes include updates to the model registry, preprocessor configuration parsing for nested formats, and tokenizer encoding to ensure special tokens are correctly recognized. Performance optimizations were also added to the model gateway, such as offloading image preprocessing to a blocking thread pool and stripping multimodal data from decode requests to reduce memory overhead. Review feedback focuses on memory efficiency, specifically suggesting the use of reference-counted pointers to avoid deep clones of image data and cautioning against large vector allocations during image processing.
```rust
let registry = components.image_processor_registry.clone();
let model_id_owned = model_id.to_string();
let model_type_owned = model_type.map(String::from);
let image_clones: Vec<image::DynamicImage> = images.iter().map(|f| f.image.clone()).collect();
```
Cloning all images into a Vec<DynamicImage> before spawning the blocking task creates a full copy of the image data in memory for every request. Since data passed to spawned background tasks must have a 'static lifetime, use reference-counted pointers like Arc to share the data efficiently instead of performing deep clones or attempting to pass references.
References
- Data passed to spawned background tasks must have a 'static lifetime. Use owned types or reference-counted pointers like Arc instead of passing references to ensure the data outlives the task.
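The reviewer's suggestion amounts to wrapping the image batch in an Arc and handing the task a clone of the pointer rather than of the pixels. A minimal sketch, using std::thread in place of tokio's spawn_blocking and a stand-in Image type for image::DynamicImage:

```rust
// Sketch of the Arc-based sharing suggested in the review. `Image` is a
// stand-in for image::DynamicImage; std::thread::spawn stands in for
// tokio::task::spawn_blocking (both require 'static captures).
use std::sync::Arc;
use std::thread;

struct Image(Vec<u8>);

fn main() {
    let images: Arc<Vec<Image>> = Arc::new(vec![Image(vec![0u8; 1024])]);
    let for_task = Arc::clone(&images); // refcount bump, no pixel copy
    let handle = thread::spawn(move || {
        // CPU-heavy preprocessing would run here against the shared data.
        for_task.iter().map(|img| img.0.len()).sum::<usize>()
    });
    assert_eq!(handle.join().unwrap(), 1024);
    // The task's clone is dropped once it finishes; one owner remains.
    assert_eq!(Arc::strong_count(&images), 1);
}
```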
```rust
let scale: [f32; 3] = std::array::from_fn(|c| 1.0 / (255.0 * std[c] as f32));
let bias: [f32; 3] = std::array::from_fn(|c| -(mean[c] as f32) / (std[c] as f32));

let mut data = vec![0.0f32; 3 * canvas_pixels];
```
The vector allocation vec![0.0f32; 3 * canvas_pixels] is potentially large. Given that canvas_pixels can be up to 512 * 512 (or more depending on input), this could lead to memory allocation failures or fragmentation. Consider using a pre-allocated buffer or a more memory-efficient approach if this is called frequently in a high-concurrency environment.
```rust
let num_patches = grid_h * grid_w;
let patch_features = channels * patch_size * patch_size;

let mut patches = Vec::with_capacity(num_patches * patch_features);
```
```rust
// Strip multimodal data from decode request — the decode worker only
// needs the KV cache from prefill, not the pixel tensors (~40MB saved).
let mut decode_request = proto_request;
decode_request.clear_mm_inputs();
```
- Remove dead download_model_configs_from_hf (replaced by upstream resolve_model_config_dir)
- Extract in_patch_limit/patch_limit_on_one_side from media_proc_cfg into config.extra, read in from_preprocessor_config
- Always check media_proc_cfg for all fields, not just when image_mean/std are missing (fixes partial config overlap)
- Log a warning when the placeholder_token_id lookup fails instead of a silent .ok()
- Add config_model_type fallback to KimiK25VisionSpec::matches
- Add tests: 1x1 image, empty batch, from_preprocessor_config limits
- Improve the tiktoken encode comment explaining why encode_with_special_tokens is used

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Make Kimi-K2.5 code comments self-contained instead of comparing against Qwen-VL internals. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…arse_mm_inputs

sglang v0.5.10 mm_utils.has_shm_features() accesses req.mm_inputs.mm_items via attribute access, which fails when mm_inputs is a plain dict. Return a proper MultimodalInputs dataclass to fix the AttributeError crash on VLM requests in gRPC mode.
Closing to reopen with correct branch naming convention.
Summary
Add multimodal (image) support for moonshotai/Kimi-K2.5 in the gRPC PD router, matching the HTTP path's accuracy and improving TTFT at high concurrency.
- KimiK25VisionSpec: <|media_pad|> placeholder, grid_thws field layout, media_placeholder_token_id from config
- KimiK25Processor: [N, 3, 14, 14] patches for MoonViT's Conv2d
- Parse nested media_proc_cfg from Kimi's non-standard preprocessor_config.json
- Tokenizer: encode_with_special_tokens so chat template special tokens (e.g., <|media_pad|>) are recognized as single token IDs
- download_model_configs_from_hf fetches config.json + preprocessor_config.json when not available locally

Validation
Test plan

- cargo test -p llm-multimodal -- kimi (17 tests)
- cargo test -p llm-tokenizer (103 tests including special token encoding)
- pre-commit run --all-files clean

🤖 Generated with Claude Code