feat(multimodal): add Kimi-K2.5 vision support for gRPC router #1044
Kangyan-Zhou wants to merge 25 commits into lightseekorg:main from
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: be79785a5b
// The input to encode() is always chat-template-rendered text containing
// special tokens that must be recognized. Raw user text is never encoded
// directly — it goes through the chat template first.
let tokens = self.tokenizer.encode_with_special_tokens(input);
Respect add_special_tokens in tiktoken encode path
encode() now always uses encode_with_special_tokens, so calls that explicitly pass add_special_tokens=false still treat raw text like <|...|> as control tokens. That breaks current raw-text paths (for example model_gateway/src/routers/grpc/regular/stages/generate/preparation.rs and /v1/tokenize in model_gateway/src/routers/tokenize/handlers.rs), where comments and behavior expect literal user text tokenization without special-token interpretation. In practice, prompts containing reserved token strings can be re-tokenized into control IDs and change generation semantics unexpectedly.
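The semantic difference can be illustrated with a toy sketch (a hypothetical mini-tokenizer with a made-up placeholder id, not the repo's tiktoken wrapper): with special-token matching on, the literal string `<|media_pad|>` collapses to one control id; with it off, the same text should be encoded like ordinary bytes.

```rust
// Toy illustration of why ignoring add_special_tokens changes semantics.
// SPECIAL and its id are invented for this sketch.
fn encode(input: &str, add_special_tokens: bool) -> Vec<u32> {
    const SPECIAL: (&str, u32) = ("<|media_pad|>", 163_838); // hypothetical control id
    let mut out = Vec::new();
    let mut rest = input;
    while !rest.is_empty() {
        if add_special_tokens {
            if let Some(pos) = rest.find(SPECIAL.0) {
                // Text before the match is byte-encoded, the match becomes one token.
                rest[..pos].bytes().for_each(|b| out.push(b as u32));
                out.push(SPECIAL.1);
                rest = &rest[pos + SPECIAL.0.len()..];
                continue;
            }
        }
        // "Ordinary" path: the placeholder string stays literal text.
        rest.bytes().for_each(|b| out.push(b as u32));
        rest = "";
    }
    out
}

fn main() {
    let text = "hi<|media_pad|>!";
    let with = encode(text, true);
    let without = encode(text, false);
    assert!(with.contains(&163_838)); // recognized as a single control token
    assert!(!without.contains(&163_838)); // left as literal text
    assert_eq!(without.len(), text.len()); // one byte-token per byte
}
```

A prompt that happens to contain a reserved token string therefore produces different ids depending on the flag, which is exactly the regression described above.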
for image in images {
    let (w, h) = image.dimensions();
    let cfg = self.compute_resize_config(w as usize, h as usize);
Use preprocessor config for Kimi resize/token limits
preprocess() computes resize, patch counts, and num_img_tokens from self.compute_resize_config(...), which only reads struct fields initialized by KimiK25Processor::new(). Because this path never applies PreProcessorConfig limits (media_proc_cfg values like in_patch_limit / patch_limit_on_one_side), runtime behavior is locked to hardcoded defaults even though those fields are parsed. If a model config differs from defaults, placeholder expansion and patch layout will diverge from model expectations.
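A minimal sketch of the per-call merge this comment is asking for. The field names and the one-side default of 512 are assumptions for illustration, not the repo's actual types:

```rust
// Request-level config wins; constructor defaults fill the gaps.
#[derive(Clone, Copy)]
struct KimiLimits {
    in_patch_limit: usize,
    patch_limit_on_one_side: usize,
}

// Stand-in for the parsed PreProcessorConfig values (both optional).
struct PreProcessorCfg {
    in_patch_limit: Option<usize>,
    patch_limit_on_one_side: Option<usize>,
}

impl KimiLimits {
    fn effective(self, cfg: &PreProcessorCfg) -> KimiLimits {
        KimiLimits {
            in_patch_limit: cfg.in_patch_limit.unwrap_or(self.in_patch_limit),
            patch_limit_on_one_side: cfg
                .patch_limit_on_one_side
                .unwrap_or(self.patch_limit_on_one_side),
        }
    }
}

fn main() {
    let defaults = KimiLimits { in_patch_limit: 16384, patch_limit_on_one_side: 512 };
    let cfg = PreProcessorCfg { in_patch_limit: Some(4096), patch_limit_on_one_side: None };
    let eff = defaults.effective(&cfg);
    assert_eq!(eff.in_patch_limit, 4096); // per-request override applied
    assert_eq!(eff.patch_limit_on_one_side, 512); // default retained
}
```

The key point is that `preprocess()` would read the merged values rather than the shared instance's fields, so a non-default model config actually takes effect.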
Code Review
This pull request adds support for the Kimi-K2.5 (MoonViT) model, including its specific vision specification and image processor. It updates the preprocessor configuration to handle nested JSON structures and optimizes the gateway by moving image preprocessing to a blocking thread pool and stripping multimodal data from decode requests. Review feedback suggests improving the efficiency of JSON parsing by utilizing existing deserialized data and moving image cloning operations into the blocking task to prevent blocking the async runtime.
 pub fn from_json(json: &str) -> Result<Self, serde_json::Error> {
-    serde_json::from_str(json)
+    let mut config: Self = serde_json::from_str(json)?;
+    // Extract values from nested media_proc_cfg (used by Kimi-K2.5 and similar models)
+    // when top-level fields are missing.
+    {
+        if let Ok(raw) = serde_json::from_str::<serde_json::Value>(json) {
+            if let Some(media_cfg) = raw.get("media_proc_cfg") {
+                if config.image_mean.is_none() {
+                    config.image_mean = media_cfg
+                        .get("image_mean")
+                        .and_then(|v| serde_json::from_value(v.clone()).ok());
+                }
+                if config.image_std.is_none() {
+                    config.image_std = media_cfg
+                        .get("image_std")
+                        .and_then(|v| serde_json::from_value(v.clone()).ok());
+                }
+                if config.patch_size.is_none() {
+                    config.patch_size = media_cfg.get("patch_size").and_then(|v| {
+                        v.as_u64().map(|ps| PatchSize {
+                            height: Some(ps as u32),
+                            width: Some(ps as u32),
+                        })
+                    });
+                }
+                if config.merge_size.is_none() {
+                    config.merge_size = media_cfg
+                        .get("merge_kernel_size")
+                        .and_then(|v| v.as_u64())
+                        .map(|v| v as usize);
+                }
+                // Also extract Kimi-specific limits into the extra map
+                // so processors can read them via get_extra()
+                for key in ["in_patch_limit", "patch_limit_on_one_side"] {
+                    if !config.extra.contains_key(key) {
+                        if let Some(v) = media_cfg.get(key) {
+                            config.extra.insert(key.to_string(), v.clone());
+                        }
+                    }
+                }
+            }
+        }
+    }
+    Ok(config)
 }
 /// Parse from JSON value.
 ///
 /// Handles nested `media_proc_cfg` the same way as `from_json`.
 pub fn from_value(value: serde_json::Value) -> Result<Self, serde_json::Error> {
-    serde_json::from_value(value)
+    let json_str = serde_json::to_string(&value)?;
+    Self::from_json(&json_str)
 }
The current implementation of from_json and from_value is inefficient as it parses the JSON string multiple times and performs unnecessary stringification. Since PreProcessorConfig uses #[serde(flatten)] for the extra field, any non-standard fields like media_proc_cfg are already available in the extra map after the first deserialization. Additionally, avoid silently ignoring potential failures during field extraction; instead, log them as warnings to aid in debugging, as per repository guidelines.
pub fn from_json(json: &str) -> Result<Self, serde_json::Error> {
    let mut config: Self = serde_json::from_str(json)?;
    if let Some(media_cfg) = config.extra.get("media_proc_cfg").cloned() {
        Self::apply_kimi_patches(&mut config, &media_cfg);
    }
    Ok(config)
}

pub fn from_value(value: serde_json::Value) -> Result<Self, serde_json::Error> {
    let mut config: Self = serde_json::from_value(value)?;
    if let Some(media_cfg) = config.extra.get("media_proc_cfg").cloned() {
        Self::apply_kimi_patches(&mut config, &media_cfg);
    }
    Ok(config)
}

fn apply_kimi_patches(config: &mut Self, media_cfg: &serde_json::Value) {
    if config.image_mean.is_none() {
        config.image_mean = media_cfg.get("image_mean").and_then(|v| {
            serde_json::from_value(v.clone())
                .map_err(|e| log::warn!("Failed to parse image_mean from media_proc_cfg: {}", e))
                .ok()
        });
    }
    if config.image_std.is_none() {
        config.image_std = media_cfg.get("image_std").and_then(|v| {
            serde_json::from_value(v.clone())
                .map_err(|e| log::warn!("Failed to parse image_std from media_proc_cfg: {}", e))
                .ok()
        });
    }
    if config.patch_size.is_none() {
        config.patch_size = media_cfg.get("patch_size").and_then(|v| {
            serde_json::from_value(v.clone())
                .map_err(|e| log::warn!("Failed to parse patch_size from media_proc_cfg: {}", e))
                .ok()
                .flatten()
        });
    }
    if config.merge_size.is_none() {
        config.merge_size = media_cfg
            .get("merge_kernel_size")
            .and_then(|v| v.as_u64())
            .map(|v| v as usize);
    }
    for key in ["in_patch_limit", "patch_limit_on_one_side"] {
        if !config.extra.contains_key(key) {
            if let Some(v) = media_cfg.get(key) {
                config.extra.insert(key.to_string(), v.clone());
            }
        }
    }
}

References
- Instead of silently ignoring potential failures (e.g., from serialization), log them as warnings to aid in debugging. In Rust, prefer using unwrap_or_else to log an error over unwrap_or_default which would fail silently.
let pp_config = model_config.preprocessor_config.clone();
let registry = components.image_processor_registry.clone();
let model_id_owned = model_id.to_string();
let model_type_owned = model_type.map(String::from);
let image_clones: Vec<image::DynamicImage> = images.iter().map(|f| f.image.clone()).collect();

let preprocessed: PreprocessedImages = tokio::task::spawn_blocking(move || {
    let processor = registry
        .find(&model_id_owned, model_type_owned.as_deref())
        .ok_or_else(|| {
            anyhow::anyhow!("No image processor found for model: {model_id_owned}")
        })?;
    processor
        .preprocess(&image_clones, &pp_config)
        .map_err(|e| anyhow::anyhow!("Image preprocessing failed: {e}"))
})
.await
.map_err(|e| anyhow::anyhow!("Preprocessing task panicked: {e}"))??;
Cloning DynamicImage is a CPU-intensive operation and should be moved into the spawn_blocking closure. Furthermore, ensure all data passed to the spawned task has a 'static lifetime by using owned types. For frequently cloned identifiers like model_id, consider using an interned string type like Arc<str> to improve performance by making clones cheap.
 let pp_config = model_config.preprocessor_config.clone();
 let registry = components.image_processor_registry.clone();
 let model_id_owned = model_id.to_string();
 let model_type_owned = model_type.map(String::from);
-let image_clones: Vec<image::DynamicImage> = images.iter().map(|f| f.image.clone()).collect();
+let images_for_pp = images.clone();
 let preprocessed: PreprocessedImages = tokio::task::spawn_blocking(move || {
+    let image_clones: Vec<image::DynamicImage> =
+        images_for_pp.iter().map(|f| f.image.clone()).collect();
     let processor = registry
         .find(&model_id_owned, model_type_owned.as_deref())
         .ok_or_else(|| {
             anyhow::anyhow!("No image processor found for model: {model_id_owned}")
         })?;
     processor
         .preprocess(&image_clones, &pp_config)
         .map_err(|e| anyhow::anyhow!("Image preprocessing failed: {e}"))
 })
 .await
 .map_err(|e| anyhow::anyhow!("Preprocessing task panicked: {e}"))??
References
- Data passed to spawned background tasks must have a 'static lifetime. Use owned types or reference-counted pointers like Arc instead of passing references to ensure the data outlives the task.
- For types that are frequently cloned on hot paths and represent a small, repeated set of values (e.g., worker IDs or tenant IDs), use an interned string type like Arc<str> to improve performance.
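The ownership pattern behind both references can be sketched with `std::thread::spawn` as a stand-in for `tokio::task::spawn_blocking` (an assumption for this sketch: both require `'static` owned captures), plus an interned `Arc<str>` identifier:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Interned identifier: one heap allocation, then O(1) refcount clones.
    let model_id: Arc<str> = Arc::from("kimi-k2.5");
    // Stand-in pixel buffers (4 images of 1 MiB each).
    let images: Vec<Vec<u8>> = vec![vec![0u8; 1 << 20]; 4];

    let model_for_task = Arc::clone(&model_id); // cheap: no string copy
    let handle = thread::spawn(move || {
        // The heavy deep copy happens on the worker thread,
        // not on the caller's (async) thread.
        let clones: Vec<Vec<u8>> = images.iter().cloned().collect();
        (model_for_task, clones.iter().map(|v| v.len()).sum::<usize>())
    });

    let (id, total_bytes) = handle.join().expect("worker panicked");
    assert_eq!(&*id, "kimi-k2.5");
    assert_eq!(total_bytes, 4 << 20);
}
```

Moving the owned data into the closure satisfies the `'static` bound, and `Arc::clone` keeps the per-task identifier cost at a refcount increment.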
📝 Walkthrough
Adds Kimi K2.5 vision support: new KimiK25VisionSpec and KimiK25Processor, registry and image-processor registrations, nested media config extraction, tokenizer special-token encoding, and gateway/proto updates to run preprocessing on a blocking thread and optionally clear multimodal inputs.
Sequence Diagram(s)
sequenceDiagram
participant GW as gRPC Gateway
participant Reg as Vision Registry
participant Proc as KimiK25Processor
participant Spec as KimiK25VisionSpec
participant TK as Tokenizer
GW->>Reg: lookup processor by model_id / model_type
Reg-->>GW: KimiK25Processor
GW->>+Proc: spawn_blocking preprocess(images, config)
Proc->>Proc: read config (patch_size, merge_size, limits)
Proc->>Proc: resize → pad → normalize (fused)
Proc->>Proc: extract patches, compute grid_thws & num_img_tokens
Proc-->>-GW: PreprocessedImages {pixel_values, num_img_tokens, metadata}
GW->>Spec: match model_id / config.model_type
Spec-->>GW: require media_placeholder_token_id / field layouts
GW->>Spec: resolve media_placeholder_token_id(config)
Spec-->>GW: token_id or MissingConfigField
GW->>TK: encode placeholder token (fallback)
TK-->>GW: token_id
GW->>Spec: generate_prompt_replacements(preprocessed)
Spec-->>GW: PromptReplacements per image
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/multimodal/src/vision/preprocessor_config.rs`:
- Around line 253-259: The current logic that sets config.patch_size from
media_cfg only handles the integer form (v.as_u64) and ignores the object form
like {"height":14,"width":14}; update the extraction for config.patch_size (the
block that reads media_cfg.get("patch_size")) to also check for
v.as_object()/v.as_object().and_then(...) and map optional "height" and "width"
fields (as_u64 -> u32) into a PatchSize { height: Some(...), width: Some(...) }
falling back to the scalar behavior if the value is a single integer; preserve
existing None handling so defaults aren't silently used.
In `@crates/multimodal/src/vision/processors/kimi_k25.rs`:
- Around line 96-125: compute_resize_config currently computes scale using
unaligned patch counts (s1) which can be exceeded after aligning to factor and
applying pad_width/pad_height; update compute_resize_config (and ResizeConfig
output) to ensure the final aligned token count <= self.in_patch_limit: after
computing new_w/new_h, factor, pad_width, pad_height and deriving
token_width/token_height/num_tokens, if num_tokens > self.in_patch_limit then
reduce new_w and/or new_h (for example decrement by factor steps or compute a
tightened scale from in_patch_limit divided by aligned patch grid) and recompute
pad_width/pad_height/token dimensions until num_tokens <= self.in_patch_limit;
ensure you reference and update symbols new_w, new_h, pad_width, pad_height,
token_width, token_height, num_tokens and return the adjusted values in
ResizeConfig.
- Around line 67-78: from_preprocessor_config() correctly parses patch_size,
merge_size, in_patch_limit and patch_limit_on_one_side from PreProcessorConfig
but those per-request overrides are never applied in the live path; update
preprocess() and calculate_num_tokens() to compute and use an effective set of
processor settings by merging the call's media_proc_cfg/PreProcessorConfig (or
the media_proc_cfg passed through ImageProcessorRegistry) with the instance
defaults (self.patch_size, self.merge_size, self.in_patch_limit,
self.patch_limit_on_one_side) and then use those local effective values for
layout and placeholder calculations instead of always reading self.* so
per-request non-default Kimi config is respected.
- Around line 184-190: Replace the panicking
Array3::from_shape_vec(...).expect(...) calls with error-aware code that maps
the shape/vec construction failure into a TransformError return; e.g., call
Array3::from_shape_vec((3, canvas_h, canvas_w), data) and propagate its Err via
Result::map_err into an appropriate TransformError value (with an "INVARIANT:
data has exactly 3*canvas_h*canvas_w elements by construction" comment next to
the mapping), updating the enclosing function signature to return Result<_,
TransformError> if needed; apply the same pattern to the other expect() usages
around lines 210-216 so preprocessing shape/layout failures produce
Err(TransformError) rather than panicking.
In `@crates/tokenizer/src/tiktoken.rs`:
- Around line 439-450: The encode() implementation currently ignores the
add_special_tokens flag and always calls
self.tokenizer.encode_with_special_tokens(input); change it to respect the
caller intent by calling encode_with_special_tokens only when add_special_tokens
is true, and call the tokenizer method that preserves literal special-token
strings (e.g., encode or encode_without_special_tokens) when add_special_tokens
is false, returning Encoding::Tiktoken(...) as before; if Kimi needs placeholder
recognition independent of BOS/EOS, add an explicit flag/parameter instead of
repurposing add_special_tokens.
In `@model_gateway/src/routers/grpc/multimodal.rs`:
- Around line 434-453: The current code clones every image on the async runtime
before calling tokio::task::spawn_blocking (the images.iter().map(|f|
f.image.clone()).collect() line), which performs the heavy pixel copy off the
blocking pool too late; move the deep image cloning into the blocking closure so
the async task only moves lightweight handles into spawn_blocking. Specifically,
change the closure passed to tokio::task::spawn_blocking to capture/move the
original images (and pp_config, registry, model_id/model_type) and perform image
cloning and processor.preprocess(...) inside that closure (keeping the existing
error mapping), so the async runtime doesn't perform the O(total_pixels) copy
before spawn_blocking.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: f673a289-cf17-45dd-b8b3-7102970082ff
📒 Files selected for processing (11)
crates/multimodal/src/registry/kimi_k25.rs
crates/multimodal/src/registry/mod.rs
crates/multimodal/src/vision/image_processor.rs
crates/multimodal/src/vision/preprocessor_config.rs
crates/multimodal/src/vision/processors/kimi_k25.rs
crates/multimodal/src/vision/processors/mod.rs
crates/tokenizer/src/tiktoken.rs
grpc_servicer/smg_grpc_servicer/sglang/servicer.py
model_gateway/src/routers/grpc/common/stages/request_execution.rs
model_gateway/src/routers/grpc/multimodal.rs
model_gateway/src/routers/grpc/proto_wrapper.rs
if config.patch_size.is_none() {
    config.patch_size = media_cfg.get("patch_size").and_then(|v| {
        v.as_u64().map(|ps| PatchSize {
            height: Some(ps as u32),
            width: Some(ps as u32),
        })
    });
Nested media_proc_cfg.patch_size only supports integer form.
The new nested extraction ignores object-form patch size (e.g. {"height":14,"width":14}), even though this config type is supported elsewhere. That can silently fall back to defaults and break preprocessing dimensions.
🔧 Proposed fix
if config.patch_size.is_none() {
config.patch_size = media_cfg.get("patch_size").and_then(|v| {
- v.as_u64().map(|ps| PatchSize {
- height: Some(ps as u32),
- width: Some(ps as u32),
- })
+ v.as_u64()
+ .and_then(|ps| u32::try_from(ps).ok())
+ .map(|ps| PatchSize {
+ height: Some(ps),
+ width: Some(ps),
+ })
+ .or_else(|| {
+ let h = v.get("height")?.as_u64().and_then(|x| u32::try_from(x).ok())?;
+ let w = v
+ .get("width")
+ .and_then(|x| x.as_u64())
+ .and_then(|x| u32::try_from(x).ok())
+ .unwrap_or(h);
+ Some(PatchSize {
+ height: Some(h),
+ width: Some(w),
+ })
+ })
});

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if config.patch_size.is_none() {
    config.patch_size = media_cfg.get("patch_size").and_then(|v| {
        v.as_u64().map(|ps| PatchSize {
            height: Some(ps as u32),
            width: Some(ps as u32),
        })
    });

if config.patch_size.is_none() {
    config.patch_size = media_cfg.get("patch_size").and_then(|v| {
        v.as_u64()
            .and_then(|ps| u32::try_from(ps).ok())
            .map(|ps| PatchSize {
                height: Some(ps),
                width: Some(ps),
            })
            .or_else(|| {
                let h = v.get("height")?.as_u64().and_then(|x| u32::try_from(x).ok())?;
                let w = v
                    .get("width")
                    .and_then(|x| x.as_u64())
                    .and_then(|x| u32::try_from(x).ok())
                    .unwrap_or(h);
                Some(PatchSize {
                    height: Some(h),
                    width: Some(w),
                })
            })
    });
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/multimodal/src/vision/preprocessor_config.rs` around lines 253 - 259,
The current logic that sets config.patch_size from media_cfg only handles the
integer form (v.as_u64) and ignores the object form like
{"height":14,"width":14}; update the extraction for config.patch_size (the block
that reads media_cfg.get("patch_size")) to also check for
v.as_object()/v.as_object().and_then(...) and map optional "height" and "width"
fields (as_u64 -> u32) into a PatchSize { height: Some(...), width: Some(...) }
falling back to the scalar behavior if the value is a single integer; preserve
existing None handling so defaults aren't silently used.
pub fn from_preprocessor_config(config: &PreProcessorConfig) -> Self {
    Self {
        patch_size: config.get_patch_size(DEFAULT_PATCH_SIZE),
        merge_size: config.merge_size.unwrap_or(DEFAULT_MERGE_SIZE),
        in_patch_limit: config
            .get_extra::<usize>("in_patch_limit")
            .unwrap_or(DEFAULT_IN_PATCH_LIMIT),
        patch_limit_on_one_side: config
            .get_extra::<usize>("patch_limit_on_one_side")
            .unwrap_or(DEFAULT_PATCH_LIMIT_ON_ONE_SIDE),
    }
}
media_proc_cfg overrides are parsed but never applied in the live path.
from_preprocessor_config() reads patch_size, merge_size, and the Kimi patch limits, but preprocess() / calculate_num_tokens() still drive all geometry off the registry instance’s fixed self.* fields. Since the router reuses a shared processor from ImageProcessorRegistry, any non-default Kimi config silently produces the wrong patch layout and placeholder counts.
💡 Apply the effective processor settings per call
 fn preprocess(
     &self,
     images: &[DynamicImage],
     config: &PreProcessorConfig,
 ) -> Result<PreprocessedImages, TransformError> {
+    let runtime = Self::from_preprocessor_config(config);
+
     if images.is_empty() {
         return Err(TransformError::EmptyBatch);
     }
 @@
     for image in images {
         let (w, h) = image.dimensions();
-        let cfg = self.compute_resize_config(w as usize, h as usize);
+        let cfg = runtime.compute_resize_config(w as usize, h as usize);
 @@
-    let grid_h = padded_h / self.patch_size;
-    let grid_w = padded_w / self.patch_size;
+    let grid_h = padded_h / runtime.patch_size;
+    let grid_w = padded_w / runtime.patch_size;
 @@
-    let patches = Self::extract_patches(&tensor, self.patch_size);
+    let patches = Self::extract_patches(&tensor, runtime.patch_size);
 @@
     let pixel_values = ndarray::Array4::from_shape_vec(
-        (total_patches, 3, self.patch_size, self.patch_size),
+        (total_patches, 3, runtime.patch_size, runtime.patch_size),
         all_patches,
     )

 fn calculate_num_tokens(&self, width: u32, height: u32, config: &PreProcessorConfig) -> usize {
-    self.compute_resize_config(width as usize, height as usize)
-        .num_tokens
+    Self::from_preprocessor_config(config)
+        .compute_resize_config(width as usize, height as usize)
+        .num_tokens
 }

Also applies to: 245-325
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/multimodal/src/vision/processors/kimi_k25.rs` around lines 67 - 78,
from_preprocessor_config() correctly parses patch_size, merge_size,
in_patch_limit and patch_limit_on_one_side from PreProcessorConfig but those
per-request overrides are never applied in the live path; update preprocess()
and calculate_num_tokens() to compute and use an effective set of processor
settings by merging the call's media_proc_cfg/PreProcessorConfig (or the
media_proc_cfg passed through ImageProcessorRegistry) with the instance defaults
(self.patch_size, self.merge_size, self.in_patch_limit,
self.patch_limit_on_one_side) and then use those local effective values for
layout and placeholder calculations instead of always reading self.* so
per-request non-default Kimi config is respected.
fn compute_resize_config(&self, width: usize, height: usize) -> ResizeConfig {
    let ps = self.patch_size;
    let patches_w = (width / ps).max(1) as f64;
    let patches_h = (height / ps).max(1) as f64;

    let s1 = (self.in_patch_limit as f64 / (patches_w * patches_h)).sqrt();
    let s2 = (self.patch_limit_on_one_side * ps) as f64 / width as f64;
    let s3 = (self.patch_limit_on_one_side * ps) as f64 / height as f64;
    let scale = f64::min(1.0, f64::min(s1, f64::min(s2, s3)));

    let new_w = ((width as f64 * scale) as usize).max(1);
    let new_h = ((height as f64 * scale) as usize).max(1);
    let new_w = new_w.min(self.patch_limit_on_one_side * ps);
    let new_h = new_h.min(self.patch_limit_on_one_side * ps);

    let factor = self.factor();
    let pad_width = (factor - new_w % factor) % factor;
    let pad_height = (factor - new_h % factor) % factor;

    let token_height = (new_h + pad_height) / factor;
    let token_width = (new_w + pad_width) / factor;
    let num_tokens = token_height * token_width;

    ResizeConfig {
        new_width: new_w,
        new_height: new_h,
        pad_width,
        pad_height,
        num_tokens,
    }
Alignment padding can exceed in_patch_limit.
s1 is computed from floor(width / patch_size) * floor(height / patch_size), but the real patch count is derived after both axes are rounded up to the 28-pixel alignment. Near the cap this overshoots: with the defaults, a 1792x1806 image scales to 1785x1798 and then pads to 1792x1820, i.e. 128x130 = 16,640 patches, which is above the configured 16,384 limit.
Please base the limit check on the aligned dimensions, or add a post-adjust step before returning ResizeConfig.
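The overshoot is reproducible arithmetic. A standalone recheck, assuming the stated defaults (patch_size = 14, alignment factor = 28, in_patch_limit = 16384) and that the one-side limits do not bind for this image:

```rust
// Recompute the reviewer's 1792x1806 example with assumed default limits.
fn main() {
    let (ps, factor, in_patch_limit) = (14usize, 28usize, 16384usize);
    let (width, height) = (1792usize, 1806usize);

    // s1 uses floor-divided patch counts BEFORE alignment.
    let patches_w = (width / ps).max(1) as f64; // 128
    let patches_h = (height / ps).max(1) as f64; // 129
    let s1 = (in_patch_limit as f64 / (patches_w * patches_h)).sqrt();
    let scale = s1.min(1.0);

    let new_w = ((width as f64 * scale) as usize).max(1); // 1785
    let new_h = ((height as f64 * scale) as usize).max(1); // 1798

    // Both axes are then rounded UP to the 28-px alignment.
    let padded_w = new_w + (factor - new_w % factor) % factor; // 1792
    let padded_h = new_h + (factor - new_h % factor) % factor; // 1820

    let patches_after = (padded_w / ps) * (padded_h / ps); // 128 * 130 = 16640
    assert_eq!(patches_after, 16640);
    assert!(patches_after > in_patch_limit); // overshoots the configured cap
}
```

Because `s1` is derived from the pre-alignment grid, the post-alignment grid can gain a whole row or column of patches and cross the limit.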
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/multimodal/src/vision/processors/kimi_k25.rs` around lines 96 - 125,
compute_resize_config currently computes scale using unaligned patch counts (s1)
which can be exceeded after aligning to factor and applying
pad_width/pad_height; update compute_resize_config (and ResizeConfig output) to
ensure the final aligned token count <= self.in_patch_limit: after computing
new_w/new_h, factor, pad_width, pad_height and deriving
token_width/token_height/num_tokens, if num_tokens > self.in_patch_limit then
reduce new_w and/or new_h (for example decrement by factor steps or compute a
tightened scale from in_patch_limit divided by aligned patch grid) and recompute
pad_width/pad_height/token dimensions until num_tokens <= self.in_patch_limit;
ensure you reference and update symbols new_w, new_h, pad_width, pad_height,
token_width, token_height, num_tokens and return the adjusted values in
ResizeConfig.
#[expect(
    clippy::expect_used,
    reason = "data has exactly 3*canvas_h*canvas_w elements by construction"
)]
Array3::from_shape_vec((3, canvas_h, canvas_w), data)
    .expect("shape matches pre-allocated buffer")
}
🛠️ Refactor suggestion | 🟠 Major
Return TransformError here instead of panicking.
Both expect() calls turn preprocessing invariants into panics in production request handling. Threading Result through these helpers would keep unexpected shape/layout regressions recoverable, and you can still document the assumption with an INVARIANT: comment next to the error mapping. Based on learnings: multimodal vision processors in this repo should avoid expect/unreachable in production code and use INVARIANT: to document safe-code assumptions.
Also applies to: 210-216
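The error-mapping shape being requested can be sketched with a stand-in for `Array3::from_shape_vec` (the repo's real `TransformError` variants differ; the enum here is assumed for illustration):

```rust
// Assumed error type for this sketch; the repo's TransformError differs.
#[derive(Debug)]
enum TransformError {
    ShapeMismatch { expected: usize, got: usize },
}

// Stand-in for ndarray's Array3::from_shape_vec: fails when the flat buffer
// length doesn't match the product of the dimensions.
fn from_shape_vec(
    shape: (usize, usize, usize),
    data: Vec<f32>,
) -> Result<Vec<f32>, TransformError> {
    let expected = shape.0 * shape.1 * shape.2;
    if data.len() == expected {
        // INVARIANT: callers build data with exactly 3*canvas_h*canvas_w elements.
        Ok(data)
    } else {
        Err(TransformError::ShapeMismatch { expected, got: data.len() })
    }
}

fn main() {
    let (canvas_h, canvas_w) = (2usize, 2usize);
    assert!(from_shape_vec((3, canvas_h, canvas_w), vec![0.0; 12]).is_ok());
    // A regression in buffer layout becomes a recoverable Err, not a panic.
    assert!(from_shape_vec((3, canvas_h, canvas_w), vec![0.0; 11]).is_err());
}
```

The invariant stays documented next to the `Ok` branch, but a violated assumption now surfaces as an `Err` the request handler can report.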
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/multimodal/src/vision/processors/kimi_k25.rs` around lines 184 - 190,
Replace the panicking Array3::from_shape_vec(...).expect(...) calls with
error-aware code that maps the shape/vec construction failure into a
TransformError return; e.g., call Array3::from_shape_vec((3, canvas_h,
canvas_w), data) and propagate its Err via Result::map_err into an appropriate
TransformError value (with an "INVARIANT: data has exactly 3*canvas_h*canvas_w
elements by construction" comment next to the mapping), updating the enclosing
function signature to return Result<_, TransformError> if needed; apply the same
pattern to the other expect() usages around lines 210-216 so preprocessing
shape/layout failures produce Err(TransformError) rather than panicking.
 fn encode(&self, input: &str, _add_special_tokens: bool) -> Result<Encoding> {
-    let tokens = self.tokenizer.encode_ordinary(input);
+    // Always use encode_with_special_tokens so that special token strings
+    // in the input (e.g., <|media_pad|> from chat templates) are recognized
+    // as single tokens rather than split into BPE sub-tokens.
+    //
+    // In the gateway, tiktoken tokenizers are only created via
+    // from_dir_with_chat_template (hub-loaded models like Kimi, DeepSeek).
+    // The input to encode() is always chat-template-rendered text containing
+    // special tokens that must be recognized. Raw user text is never encoded
+    // directly — it goes through the chat template first.
+    let tokens = self.tokenizer.encode_with_special_tokens(input);
     Ok(Encoding::Tiktoken(tokens))
encode() now ignores caller intent and silently changes tokenization semantics.
add_special_tokens is ignored, but multiple call sites pass false expecting literal special-token strings to not be interpreted as control tokens. This change can alter token counts and behavior for raw user text paths.
🔧 Proposed fix
-fn encode(&self, input: &str, _add_special_tokens: bool) -> Result<Encoding> {
+fn encode(&self, input: &str, add_special_tokens: bool) -> Result<Encoding> {
@@
-    let tokens = self.tokenizer.encode_with_special_tokens(input);
+    let tokens = if add_special_tokens {
+        self.tokenizer.encode_with_special_tokens(input)
+    } else {
+        self.tokenizer.encode_ordinary(input)
+    };
     Ok(Encoding::Tiktoken(tokens))
 }

-let encoding = tokenizer.encode("hello<|endoftext|>world", false).unwrap();
+let encoding = tokenizer.encode("hello<|endoftext|>world", true).unwrap();

If Kimi needs placeholder recognition independently of BOS/EOS behavior, add an explicit API/flag for that instead of repurposing add_special_tokens.
Also applies to: 812-825
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tokenizer/src/tiktoken.rs` around lines 439 - 450, The encode()
implementation currently ignores the add_special_tokens flag and always calls
self.tokenizer.encode_with_special_tokens(input); change it to respect the
caller intent by calling encode_with_special_tokens only when add_special_tokens
is true, and call the tokenizer method that preserves literal special-token
strings (e.g., encode or encode_without_special_tokens) when add_special_tokens
is false, returning Encoding::Tiktoken(...) as before; if Kimi needs placeholder
recognition independent of BOS/EOS, add an explicit flag/parameter instead of
repurposing add_special_tokens.
I think it would be better for us to respect the parameter add_special_tokens
@CatherineSue, agreed! Here's the concrete fix — branch on the flag and update the corresponding test:
impl Encoder for TiktokenTokenizer {
fn encode(&self, input: &str, add_special_tokens: bool) -> Result<Encoding> {
let tokens = if add_special_tokens {
self.tokenizer.encode_with_special_tokens(input)
} else {
self.tokenizer.encode_ordinary(input)
};
Ok(Encoding::Tiktoken(tokens))
}
// ...
}

And update the test to pass `true` since it's explicitly testing special-token recognition:
#[test]
fn test_encode_recognizes_special_tokens_in_input() {
let tokenizer = TiktokenTokenizer::new(TiktokenModel::Cl100kBase).unwrap();
// Pass add_special_tokens=true to recognize <|endoftext|> as a single token
let encoding = tokenizer.encode("hello<|endoftext|>world", true).unwrap();
let ids = encoding.token_ids();
assert!(
ids.contains(&100257),
"Special token <|endoftext|> should be recognized as single token, got: {ids:?}"
);
}

The comment block inside `encode()` (lines 441–449) that explains the "always use `encode_with_special_tokens`" rationale should also be removed or replaced with a brief note, since it no longer reflects the behavior.
The call sites in the gateway that render chat-template output (which contains <|media_pad|>) should pass add_special_tokens: true to preserve single-token encoding of those placeholders.
🧠 Learnings used
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 914
File: crates/tokenizer/src/huggingface.rs:187-199
Timestamp: 2026-03-26T04:04:53.953Z
Learning: In `crates/tokenizer/src/huggingface.rs` (lightseekorg/smg), the `extract_special_tokens` / `load_chat_template_and_config` path intentionally makes no distinction between a missing key and an explicit `null` value for special tokens (bos_token, eos_token, unk_token, pad_token) in `tokenizer_config.json`. Both cases produce `None` and fall back to vocab pattern matching. This is considered an acceptable trade-off because explicit `null` special tokens in real HuggingFace configs are extremely rare. Do not flag the absence of `Some(None)` / triple-Option handling for this codepath.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 990
File: crates/tokenizer/benches/stop_decoder.rs:23-24
Timestamp: 2026-04-01T08:02:32.590Z
Learning: In `lightseekorg/smg` (`crates/tokenizer/benches/stop_decoder.rs`), benchmarks intentionally use `MockTokenizer` (not a real HuggingFace tokenizer) to keep them hermetic and avoid network/filesystem dependencies for model downloads. The HuggingFace native `step_decode_stream` fast-path is covered by the integration test `test_sequence_operations` in `crates/tokenizer/tests/tokenizer_integration.rs`, which uses a real HF tokenizer. A fixture-backed HF benchmark is a known follow-up item but is not required in this PR. Do not flag the absence of HF tokenizer usage in the bench file as a gap.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 497
File: model_gateway/src/routers/grpc/regular/stages/chat/request_building.rs:108-113
Timestamp: 2026-02-21T23:56:04.191Z
Learning: In repo lightseekorg/smg, file model_gateway/src/routers/grpc/regular/stages/chat/request_building.rs: When fetching tokenizer_source via ctx.components.tokenizer_registry.get_by_name(model_id).map(|e| e.source).unwrap_or_default(), an empty string is an intentional valid fallback when the model isn't in the registry. This allows config loading to proceed with the default path. Comments documenting this fallback are not necessary to keep noise down.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 570
File: model_gateway/src/routers/grpc/regular/stages/chat/preparation.rs:87-97
Timestamp: 2026-03-01T05:57:56.940Z
Learning: In model_gateway/src/routers/grpc/regular/stages/chat/preparation.rs, when multimodal content is detected, an empty tokenizer_source must be rejected with a bad_request (multimodal_config_missing) before calling process_multimodal(). Without a valid tokenizer source, get_or_load_config() fails with a confusing file-not-found error downstream. The early guard provides a clear error message. This differs from request_building.rs, where empty tokenizer_source is a valid fallback for non-multimodal config loading.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 990
File: crates/tokenizer/src/sequence.rs:130-135
Timestamp: 2026-03-31T18:18:07.958Z
Learning: In `lightseekorg/smg` (`crates/tokenizer/src/sequence.rs`), `Sequence` is only used inside `StopSequenceDecoder`, which exclusively calls `append_token()`. The `append_text()` method is only exercised in unit tests (e.g. `test_sequence_append_text`) that do not subsequently call `append_token()`. No production code mixes the two paths. `append_text()` does not invalidate `cached_prefix`/`prefix_index`, so mixing `append_text()` + `append_token()` on the same instance would produce stale incremental diffs — but this is not a current production concern.
Learnt from: slin1237
Repo: lightseekorg/smg PR: 439
File: tokenizer/src/chat_template.rs:528-555
Timestamp: 2026-02-17T03:09:48.695Z
Learning: In tokenizer/src/chat_template.rs, the load_chat_template_from_file function intentionally uses trim() on .jinja file content to match Python transformers behavior. HuggingFace .jinja template files commonly have trailing newlines from editors/git that are file-level artifacts (not intentional prompt whitespace), and trim() removes these to prevent breaking template rendering.
Learnt from: slin1237
Repo: lightseekorg/smg PR: 439
File: tokenizer/src/cache/mod.rs:274-285
Timestamp: 2026-02-17T02:57:28.834Z
Learning: In the tokenizer crate, CachedTokenizer wraps Arc<dyn Tokenizer> and intentionally does not delegate set_chat_template because Arc doesn't allow mutation through a shared reference without interior mutability (e.g., Mutex/RwLock), which would be over-engineering for a read-through cache. The default trait no-op is appropriate.
Learnt from: key4ng
Repo: lightseekorg/smg PR: 1006
File: crates/tool_parser/src/parsers/deepseek31.rs:162-181
Timestamp: 2026-04-01T04:14:46.469Z
Learning: In repo lightseekorg/smg, `crates/tool_parser/src/parsers/deepseek31.rs` (and the analogous V3 parser `crates/tool_parser/src/parsers/deepseek.rs` lines 186-197): The `parse_incremental` method does NOT split off a plain-text prefix before the first tool marker within the same chunk. This is intentional because the gRPC streaming path delivers tokens individually, so normal text content and tool-call markers always arrive in separate chunks — a prefix and a tool marker will never coexist in the same chunk. Do not flag the absence of a within-chunk prefix-split as a bug; the `test_deepseek31_streaming_text_before_tools` test covers the realistic multi-chunk case.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 495
File: grpc_client/proto/sglang_scheduler.proto:140-172
Timestamp: 2026-02-21T11:57:36.560Z
Learning: In repo lightseekorg/smg, the multimodal proto fields in `grpc_client/proto/sglang_scheduler.proto` (including `MultimodalInputs.pixel_values`, `model_specific_tensors`, `im_token_id`, and `mm_placeholders`) were introduced in PR `#495` as the first real usage; prior internal-only development fields like `processed_features` were never shipped to production, so wire-compatibility concerns do not apply to these field number assignments.
Learnt from: key4ng
Repo: lightseekorg/smg PR: 1034
File: model_gateway/src/routers/grpc/multimodal.rs:602-610
Timestamp: 2026-04-04T00:29:50.135Z
Learning: In repo lightseekorg/smg, the gRPC regular pipeline runs stages in order: Preparation (tokenization + multimodal media fetch + preprocessing) → Worker Selection → Request Building. Because backend identity (e.g., GrpcClient::Mlx) is only resolved after worker selection, it is architecturally impossible to short-circuit MLX multimodal rejection before process_multimodal() / process_multimodal_messages() runs in the Preparation stage. The earliest feasible rejection point for backend-specific multimodal unsupported errors is at assemble_multimodal_data in request_building (after preprocessing but before generation). Moving rejection earlier would require a larger architectural change (e.g., moving worker selection before preparation). Do not flag assemble_multimodal_data's GrpcClient::Mlx error arm as "too late" in the pipeline.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 495
File: model_gateway/src/routers/grpc/multimodal.rs:153-166
Timestamp: 2026-02-21T18:39:37.571Z
Learning: In repo lightseekorg/smg, the gRPC multimodal pipeline in model_gateway/src/routers/grpc/multimodal.rs intentionally skips ContentPart::VideoUrl silently (without warning) because video support is not yet implemented; adding warnings would be noisy for legitimate requests containing video parts that aren't processed yet. Proper video handling will be added when the feature is implemented.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 638
File: grpc_servicer/smg_grpc_servicer/vllm/servicer.py:371-406
Timestamp: 2026-03-05T04:48:49.033Z
Learning: In repo lightseekorg/smg, `grpc_servicer/smg_grpc_servicer/vllm/servicer.py` (`_build_preprocessed_mm_inputs`): multimodal metadata fields (`flat_keys`, `batched_keys`, `mm_placeholders`, `model_specific_tensors`) are produced exclusively by the internal Rust router and are treated as trusted data. Defensive input-validation guards (e.g., checking flat_keys targets exist in hf_dict, validating PlaceholderRange bounds) are intentionally omitted to avoid hot-path overhead. Do not flag missing validation on these fields in future reviews.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 495
File: model_gateway/src/routers/grpc/regular/stages/chat/preparation.rs:121-131
Timestamp: 2026-02-21T18:45:58.696Z
Learning: In repo lightseekorg/smg, all multimodal processing failures in model_gateway/src/routers/grpc/regular/stages/chat/preparation.rs currently return 400 Bad Request for simplicity, because the underlying `anyhow::Error` from the multimodal crate doesn't distinguish error types (client vs upstream failures). Error categorization (e.g., mapping upstream fetch failures to 502 Bad Gateway) is deferred to a follow-up when more failure modes need differentiation.
Learnt from: CatherineSue
Repo: lightseekorg/smg PR: 588
File: model_gateway/src/routers/grpc/multimodal.rs:453-514
Timestamp: 2026-03-03T18:03:45.713Z
Learning: In repo lightseekorg/smg, backend assembly functions in model_gateway/src/routers/grpc/multimodal.rs (e.g., assemble_sglang, assemble_vllm, assemble_trtllm) are tested via E2E tests rather than unit tests, as unit tests for these functions are not considered worthwhile.
Learnt from: XinyueZhang369
Repo: lightseekorg/smg PR: 399
File: protocols/src/interactions.rs:505-509
Timestamp: 2026-02-19T03:08:50.192Z
Learning: In code reviews for Rust projects using the validator crate (v0.20.0), ensure that custom validation functions for numeric primitive types (e.g., f32, i32, u32, i16, etc.) accept the value by value, not by reference. Example: fn validate(value: f32) { ... }. The validator derive macro has a hardcoded list of numeric types that are passed by value, while all other types are passed by reference. Apply this guideline whenever validating numeric fields to align with the derive macro behavior.
Learnt from: slin1237
Repo: lightseekorg/smg PR: 489
File: model_gateway/src/core/token_bucket.rs:58-63
Timestamp: 2026-02-21T02:30:51.443Z
Learning: For lint-only/Clippy enforcement PRs in this repository, avoid introducing behavioral changes (e.g., new input validation or logic changes). Treat such PRs as non-functional changes and plan a separate follow-up issue/PR for hardening or behavior changes. This applies broadly to Rust files across the repo; during review, focus on lint/style corrections and clearly note any intentional exceptions.
Learnt from: slin1237
Repo: lightseekorg/smg PR: 489
File: protocols/src/responses.rs:928-931
Timestamp: 2026-02-21T02:36:00.882Z
Learning: In Rust code across the repository, use the marker INVARIANT: to document assumptions in safe code. Reserve SAFETY: for explaining why unsafe blocks are sound. This improves clarity of invariants and safety reasoning. Example reference: protocols/src/responses.rs near validate_tool_choice_with_tools().
Learnt from: slin1237
Repo: lightseekorg/smg PR: 489
File: mesh/src/sync.rs:83-83
Timestamp: 2026-02-21T02:37:01.416Z
Learning: General Rust formatting rule: format! with implicit captures only supports simple identifiers, not full expressions like {state.model_id}. For cases where you want to interpolate a field or expression, bind the value first and interpolate the binding, e.g., let model_id = &state.model_id; and then use format!("policy:{}", model_id). In the specific file mesh/src/sync.rs, prefer format!("policy:{}", state.model_id) or bind to a local variable if you need named interpolation, to keep clarity and avoid unintended captures.
Learnt from: zhaowenzi
Repo: lightseekorg/smg PR: 807
File: model_gateway/src/middleware.rs:61-81
Timestamp: 2026-03-18T21:32:00.041Z
Learning: In Rust code using the http crate, HeaderMap::get() is effectively case-insensitive because HeaderName normalizes keys to lowercase on insertion and lookup. Do not require or perform explicit .to_lowercase() before HeaderMap::get() calls. Mark as not a concern for case-sensitivity in lookups; only consider normalization when inserting or comparing via HeaderName, not in lookups.
Learnt from: key4ng
Repo: lightseekorg/smg PR: 867
File: tui/src/app.rs:798-813
Timestamp: 2026-03-22T20:13:55.778Z
Learning: In this repo (lightseekorg/smg), treat the workspace `Cargo.toml`’s `package.rust-version` (MSRV) as the source of truth (e.g., `rust-version = "1.85"`). When reviewing Rust changes, do not flag usage of Rust language/library features that were stabilized on or before the MSRV (e.g., `Option::is_none_or`, stabilized in 1.82, is compatible with an MSRV of 1.85). Always verify the MSRV from the workspace `Cargo.toml` rather than relying on issue templates.
Good catch — I looked into this but add_special_tokens has fundamentally different semantics across the two backends:
- HuggingFace: controls whether BOS/EOS wrapper tokens are prepended/appended. With `false`, all vocabulary tokens are still recognized in the input.
- Tiktoken: has no BOS/EOS concept. `encode_ordinary` vs `encode_with_special_tokens` controls whether special-token patterns in the input (e.g. `<|im_start|>`, `<|media_pad|>`) are recognized as single tokens or shredded into BPE sub-tokens.
Every caller that encodes post-chat-template text passes false (meaning "don't add BOS/EOS"). If we map that to encode_ordinary in tiktoken, all special tokens inserted by the chat template get split into sub-tokens — e.g. <|endoftext|> becomes 7 tokens (<, |, endo, ft, ext, |, >) instead of one (ID 100257). This breaks inference for all tiktoken-based models.
We can't flip callers to true either — HF tokenizers would then double-add BOS/EOS on post-template text.
A proper fix requires redesigning the Encoder trait to separate "add wrapper tokens" from "recognize special-token patterns". I've updated the comment in the code to document why the parameter is intentionally ignored, and can file a follow-up issue for the trait redesign if you'd like.
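The shredding-vs-collapsing distinction can be illustrated with a self-contained toy encoder. This is a hedged sketch with hypothetical names (`ToyEncoder`, `encode_with_special`), not the tiktoken-rs API — only the semantics of the two paths are modeled:

```rust
use std::collections::HashMap;

// Toy stand-in for the two tiktoken encode paths: byte-level fallback plus an
// optional special-token table. Hypothetical; not the real tokenizer code.
struct ToyEncoder {
    special: HashMap<&'static str, u32>,
}

impl ToyEncoder {
    // Mirrors encode_ordinary: special-token strings are shredded into bytes.
    fn encode_ordinary(&self, input: &str) -> Vec<u32> {
        input.bytes().map(u32::from).collect()
    }

    // Mirrors encode_with_special_tokens: a matching special-token string
    // collapses to a single reserved ID instead of being byte-shredded.
    fn encode_with_special(&self, input: &str) -> Vec<u32> {
        let mut out = Vec::new();
        let mut rest = input;
        'outer: while !rest.is_empty() {
            for (tok, id) in &self.special {
                if let Some(stripped) = rest.strip_prefix(tok) {
                    out.push(*id);
                    rest = stripped;
                    continue 'outer;
                }
            }
            let ch = rest.chars().next().expect("non-empty");
            out.extend(rest[..ch.len_utf8()].bytes().map(u32::from));
            rest = &rest[ch.len_utf8()..];
        }
        out
    }
}
```

With `<|media_pad|>` mapped to a reserved ID, the special-aware path emits one token where the ordinary path emits thirteen byte tokens — exactly the divergence that broke placeholder expansion.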
CatherineSue
left a comment
Overall LGTM. Left some comments for future improvements.
    .preprocess(&dynamic_images, &model_config.preprocessor_config)
    .map_err(|e| anyhow::anyhow!("Image preprocessing failed: {e}"))?;
// Run CPU-intensive image preprocessing on a blocking thread pool so it
// doesn't block the tokio async runtime under concurrent load.
FIXME: This is a nice improvement/optimization. From a long-term perspective, we should consider making this configurable in the future.
Added a // TODO: consider making the thread pool size configurable. comment.
Hi @Kangyan-Zhou, this PR has merge conflicts that must be resolved before it can be merged. Please rebase your branch:

git fetch origin main
git rebase origin/main
# resolve any conflicts, then:
git push --force-with-lease
Add ModelProcessorSpec and ImagePreProcessor for moonshotai/Kimi-K2.5 so the gRPC PD router can handle multimodal (image) requests.
- KimiK25VisionSpec: matches "kimi" + "k2" model IDs, uses <|media_pad|> placeholder (media_placeholder_token_id from config), NaViT-style field layouts identical to Qwen-VL family
- KimiK25Processor: wraps QwenVLProcessorBase with Kimi-specific defaults (patch_size=14, merge_size=2, normalization=[0.5,0.5,0.5], max_pixels=3,211,264 from in_patch_limit=16384)
- Fix get_zmq_socket import path for sglang main compat
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
…locally When the tokenizer source is a HuggingFace model ID (e.g., "moonshotai/Kimi-K2.5") rather than a local directory, the gRPC router cannot read config.json and preprocessor_config.json from disk. This causes multimodal requests to fail with "Failed to read config.json". Make get_or_load_config async and fall back to downloading the two config files from HF Hub via the new download_files_from_hf helper when the local path doesn't exist. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Address review findings:
- Log errors from HF Hub downloads instead of silently swallowing them
- Add explicit error when local model directory exists but config.json is missing (prevents misleading fallback to HF Hub)
- Upgrade fallback log from debug to warn for production visibility
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
TiktokenTokenizer::encode() was using encode_ordinary() which ignores special tokens in the input text. This caused chat-template-rendered special tokens like <|media_pad|> to be split into BPE sub-tokens instead of being recognized as single token IDs. Switch to encode_with_special_tokens() unconditionally, matching HuggingFace tokenizer behavior where added special tokens are always recognized in input text. This fixes Kimi-K2.5 multimodal where the chat template inserts <|media_pad|> (ID 163605) but the tokenizer was producing sub-tokens that expand_tokens couldn't find. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Kimi-K2.5 engine accesses `item.grid_thws` (plural) on MultimodalDataItem, but the gateway was sending `image_grid_thw` (Qwen-VL convention). Rename the key in the processor output and update field_layouts/keep_on_cpu_keys to match. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Remove QwenVLProcessorBase dependency. Kimi-K2.5's MoonViT expects pixel_values as [N, C, patch_size, patch_size] (4D), not flattened [N, C*T*patch_size*patch_size] (2D) like Qwen-VL. The model's PatchEmbed3d applies Conv2d on each patch directly. Implement smart_resize and extract_patches independently, producing [total_patches, 3*14*14] = [N, 588] patches that the engine reconstructs as [N, 3, 14, 14] for Conv2d input. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
The engine's PatchEmbed3d Conv2d expects 4D input [N, C, H, W] but the gateway was serializing pixel_values as 2D [N, C*patch_size*patch_size]. Store as ndarray::Array4 so the proto shape field is [N, 3, 14, 14], which the engine reconstructs correctly for Conv2d. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
The previous smart_resize (from Qwen-VL) resized images directly to factor-aligned dimensions, stretching the content. The HF Kimi preprocessor instead:
1. Computes scale capped at 1.0 (never upscales)
2. Resizes with BICUBIC interpolation
3. Zero-pads to factor-aligned dimensions
This mismatch caused degraded image quality — the model was trained with zero-padded images, not stretched ones. Rewrite to match the HF navit_resize_image + resize_image pipeline exactly.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
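The scale-cap and zero-pad geometry described in that commit can be sketched as a pure function. This is a hedged illustration under assumed defaults (patch_size=14, factor=28, in_patch_limit=16384); `navit_resize_dims` is a hypothetical name, and the real pipeline reads these values from the preprocessor config:

```rust
// Sketch of the Kimi-style resize geometry: shrink (never upscale) to fit the
// patch budget, then zero-pad each side up to the next factor multiple.
// Returns (resized_w, resized_h, canvas_w, canvas_h).
fn navit_resize_dims(w: usize, h: usize) -> (usize, usize, usize, usize) {
    const PATCH: usize = 14;
    const FACTOR: usize = 28; // patch_size * merge_size
    const IN_PATCH_LIMIT: usize = 16_384;

    // 1. Scale capped at 1.0: only shrink when the image exceeds the budget.
    let patches = w.div_ceil(PATCH) * h.div_ceil(PATCH);
    let scale = (IN_PATCH_LIMIT as f64 / patches as f64).sqrt().min(1.0);
    let new_w = ((w as f64 * scale) as usize).max(1);
    let new_h = ((h as f64 * scale) as usize).max(1);

    // 2. Zero-pad to factor alignment; the content itself is not stretched.
    let canvas_w = new_w.div_ceil(FACTOR) * FACTOR;
    let canvas_h = new_h.div_ceil(FACTOR) * FACTOR;
    (new_w, new_h, canvas_w, canvas_h)
}
```

A small image passes through unscaled and only gains padding, while a very large one is shrunk before padding — the behavior the old stretch-to-fit resize got wrong.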
download_files_from_hf was silently failing in production (likely hf-hub crate issue). Switch to download_tokenizer_from_hf which already works for tokenizer loading and returns the HF cache directory containing config.json and preprocessor_config.json. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
download_tokenizer_from_hf only downloads tokenizer files (filtered by is_tokenizer_file), not config.json or preprocessor_config.json. Add a dedicated download_model_configs_from_hf that fetches these two files on the first multimodal request. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Add detailed logging at key points:
- Image dimensions, color type, raw bytes size after fetch
- Pixel values shape, token counts, first/last pixels, min/max
- Serialized pixel_values bytes and shape
- Token expansion details (search_token_id, placeholders, offsets)
Also use download_model_configs_from_hf and remove dead code.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
KimiK25Processor::preprocess() was reading mean/std from PreProcessorConfig which falls back to CLIP values when the config can't be parsed (Kimi's preprocessor_config.json nests values under media_proc_cfg). This caused images to be normalized with CLIP mean=[0.48,0.46,0.41] std=[0.27,0.26,0.28] instead of Kimi's mean=[0.5,0.5,0.5] std=[0.5,0.5,0.5], producing wrong pixel values that made the model misinterpret images entirely. Use self.default_mean()/default_std() which are hardcoded to the correct Kimi values. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Instead of hardcoding normalization values, parse the nested media_proc_cfg structure in Kimi's preprocessor_config.json to extract image_mean, image_std, patch_size, and merge_kernel_size. This ensures the correct values are used regardless of how the config is structured. The previous fix hardcoded [0.5,0.5,0.5] in the processor, which worked but would break if values changed. Now from_json() checks for media_proc_cfg when top-level fields are missing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Revert info-level diagnostic logging back to debug level now that the normalization root cause has been identified and fixed. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
- Make from_value consistent with from_json by delegating to it, ensuring nested media_proc_cfg extraction applies to both paths - Add test for encode_with_special_tokens verifying special token strings in input produce single token IDs Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
…tion

Two optimizations for the Kimi-K2.5 image preprocessing pipeline:
1. Fuse resize + pad + normalize into a single pass using deinterleave_rgb_to_planes with precomputed scale/bias. Eliminates 2 intermediate Array3 allocations and 2 extra passes over pixel data.
2. Replace per-element scalar indexing in extract_patches with row-based extend_from_slice (14-element memcpy per row), enabling compiler auto-vectorization.
Also take upstream multimodal.rs which has resolve_model_config_dir and updated image_processor_registry.find() API.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
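The row-copy idea from that commit can be shown on a flat `[C, H, W]` buffer. This is a simplified sketch (the real code operates on an ndarray canvas; `extract_patches` here is a stand-alone stand-in): each output row is one contiguous 14-wide slice, so the inner copy is a memcpy the compiler can vectorize instead of 14 scalar index operations.

```rust
// Extract non-overlapping 14x14 patches from a flat [C, H, W] canvas,
// emitting each patch as C*14*14 contiguous values (e.g. 3*14*14 = 588).
fn extract_patches(canvas: &[f32], c: usize, h: usize, w: usize) -> Vec<f32> {
    const PATCH: usize = 14;
    assert!(h % PATCH == 0 && w % PATCH == 0, "canvas must be patch-aligned");
    let (rows, cols) = (h / PATCH, w / PATCH);
    let mut out = Vec::with_capacity(c * h * w);
    for pr in 0..rows {
        for pc in 0..cols {
            for chan in 0..c {
                for dy in 0..PATCH {
                    let y = pr * PATCH + dy;
                    let start = chan * h * w + y * w + pc * PATCH;
                    // 14-element contiguous copy instead of 14 scalar reads.
                    out.extend_from_slice(&canvas[start..start + PATCH]);
                }
            }
        }
    }
    out
}
```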
Log timing breakdown: image fetch, config load, preprocessing, token expansion, and assembly/serialization. This helps identify which step dominates TTFT for multimodal gRPC requests. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
…inputs

Two optimizations to reduce gRPC multimodal TTFT:
1. Move image preprocessing (resize + pad + normalize + patchify) to tokio::task::spawn_blocking so CPU-intensive work doesn't block the async runtime. Under 200 concurrent requests, this prevents serialized preprocessing from inflating tail latencies.
2. Strip mm_inputs from decode worker requests in PD dual dispatch. The decode worker only needs the KV cache from prefill — sending ~40MB of pixel tensors to it was pure waste.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Replace image::resize_exact(CatmullRom) with transforms::resize() which uses fast_image_resize (AVX2/SSE4 SIMD). This is a drop-in replacement that gives 3-5x faster BICUBIC resize — the dominant CPU cost in the preprocessing pipeline. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Remove info-level timing logs (fetch_ms, config_ms, preprocess_ms, expand_ms, serialize_ms) now that performance analysis is complete. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
- Remove dead download_model_configs_from_hf (replaced by upstream resolve_model_config_dir)
- Extract in_patch_limit/patch_limit_on_one_side from media_proc_cfg into config.extra, read in from_preprocessor_config
- Always check media_proc_cfg for all fields, not just when image_mean/std are missing (fixes partial config overlap)
- Log warning when placeholder_token_id fails instead of silent .ok()
- Add config_model_type fallback to KimiK25VisionSpec::matches
- Add tests: 1x1 image, empty batch, from_preprocessor_config limits
- Improve tiktoken encode comment explaining why encode_with_special_tokens is used
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
Make Kimi-K2.5 code comments self-contained instead of comparing against Qwen-VL internals. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
…arse_mm_inputs sglang v0.5.10 mm_utils.has_shm_features() accesses req.mm_inputs.mm_items via attribute, which fails when mm_inputs is a plain dict. Return a proper MultimodalInputs dataclass to fix the AttributeError crash on VLM requests in gRPC mode. Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
be79785 to 518fc42
Hi @Kangyan-Zhou, the DCO sign-off check has failed. All commits must include a Signed-off-by line.

To fix existing commits:

# Sign off the last N commits (replace N with the number of unsigned commits)
git rebase HEAD~N --signoff
git push --force-with-lease

To sign off future commits automatically:
Actionable comments posted: 2
♻️ Duplicate comments (7)
crates/tokenizer/src/tiktoken.rs (1)
439-455: ⚠️ Potential issue | 🟠 Major

Respect `add_special_tokens` in `encode()`.

Always calling `encode_with_special_tokens()` changes raw-text tokenization for callers that explicitly pass `false`, so this silently rewrites token counts and literal special-token handling. If the multimodal path needs placeholder recognition independently of BOS/EOS behavior, that needs a separate knob; the new test on Line 837 should also flip to `true` once this is fixed.

🔧 Proposed fix

- fn encode(&self, input: &str, _add_special_tokens: bool) -> Result<Encoding> {
+ fn encode(&self, input: &str, add_special_tokens: bool) -> Result<Encoding> {
@@
-     let tokens = self.tokenizer.encode_with_special_tokens(input);
+     let tokens = if add_special_tokens {
+         self.tokenizer.encode_with_special_tokens(input)
+     } else {
+         self.tokenizer.encode_ordinary(input)
+     };
      Ok(Encoding::Tiktoken(tokens))
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/tokenizer/src/tiktoken.rs` around lines 439-455: encode() currently always calls encode_with_special_tokens and ignores the add_special_tokens flag, altering tokenization for callers that pass false. Fix: honor the add_special_tokens parameter by calling self.tokenizer.encode_with_special_tokens(input) when it is true and self.tokenizer.encode_ordinary(input) when it is false; return Encoding::Tiktoken(tokens) as before. Update any tests that assumed the current behavior (the new test that passes add_special_tokens=false should be flipped to true once this is implemented).

crates/multimodal/src/vision/preprocessor_config.rs (1)
253-259: ⚠️ Potential issue | 🟡 Minor

Nested `media_proc_cfg.patch_size` only supports integer form.

The extraction logic handles only the integer form (`v.as_u64()`), but ignores object-form patch size like `{"height":14,"width":14}`, which is supported elsewhere in this file via `deserialize_patch_size`. This could silently fall back to defaults.

🔧 Proposed fix to handle both forms

  if config.patch_size.is_none() {
      config.patch_size = media_cfg.get("patch_size").and_then(|v| {
-         v.as_u64().map(|ps| PatchSize {
-             height: Some(ps as u32),
-             width: Some(ps as u32),
-         })
+         v.as_u64()
+             .and_then(|ps| u32::try_from(ps).ok())
+             .map(|ps| PatchSize {
+                 height: Some(ps),
+                 width: Some(ps),
+             })
+             .or_else(|| {
+                 let h = v.get("height")?.as_u64().and_then(|x| u32::try_from(x).ok())?;
+                 let w = v.get("width")?.as_u64().and_then(|x| u32::try_from(x).ok())?;
+                 Some(PatchSize {
+                     height: Some(h),
+                     width: Some(w),
+                 })
+             })
      });
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multimodal/src/vision/preprocessor_config.rs` around lines 253-259: the logic that sets config.patch_size only handles the integer form via v.as_u64(), so object-form values like {"height":14,"width":14} are ignored. Update the branch to first try the integer extraction and, if that fails, deserialize the value into PatchSize using the existing deserialize_patch_size helper (or serde_json::from_value) so both numeric and object representations are supported.

crates/multimodal/src/vision/processors/kimi_k25.rs (5)
96-126: ⚠️ Potential issue | 🟠 Major

Alignment padding can push token count beyond limits.

The scale `s1` is computed using unaligned patch counts (based on `patch_size=14`), but the final `num_tokens` uses aligned dimensions (based on `factor=28`). The zero-padding rounds up each dimension by up to `factor-1` pixels, potentially exceeding `in_patch_limit` near the boundary.

Consider adding a post-alignment check or iteratively reducing dimensions if the final token count exceeds the limit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multimodal/src/vision/processors/kimi_k25.rs` around lines 96 - 126, compute_resize_config currently computes scale from unaligned patch counts (using patch_size and in_patch_limit) but then pads to factor alignment (factor()) which can increase num_tokens beyond in_patch_limit; update compute_resize_config to perform a post-alignment check after computing pad_width/pad_height/token_width/token_height/num_tokens and, if num_tokens > self.in_patch_limit, iteratively reduce new_w and/or new_h (e.g., decrement by factor or reduce scale) and recompute padding and token counts until num_tokens <= self.in_patch_limit (ensure new_w/new_h remain >=1), then return ResizeConfig; keep references to patch_size, in_patch_limit, factor(), new_w/new_h, pad_width/pad_height, token_width/token_height, and num_tokens so reviewers can find the change.
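The iterative reduction suggested above can be sketched as a stand-alone guard. This is a hedged illustration, not the PR's `compute_resize_config`: `aligned_dims_within_limit` is a hypothetical helper, and patch_size=14, factor=28, in_patch_limit=16384 are assumed defaults, with the budget measured on the zero-padded canvas:

```rust
// Shrink resized dimensions until the factor-aligned (padded) canvas fits the
// patch budget, addressing the case where padding alone overflows the limit.
fn aligned_dims_within_limit(mut new_w: usize, mut new_h: usize) -> (usize, usize) {
    const PATCH: usize = 14;
    const FACTOR: usize = 28;
    const IN_PATCH_LIMIT: usize = 16_384;

    // Patch count of the zero-padded (factor-aligned) canvas.
    let padded_patches = |w: usize, h: usize| {
        let cw = w.div_ceil(FACTOR) * FACTOR;
        let ch = h.div_ceil(FACTOR) * FACTOR;
        (cw / PATCH) * (ch / PATCH)
    };

    // Shrink the longer side one factor step at a time until padding no
    // longer pushes the patch count past the limit.
    while padded_patches(new_w, new_h) > IN_PATCH_LIMIT {
        if new_w >= new_h && new_w > FACTOR {
            new_w -= FACTOR;
        } else if new_h > FACTOR {
            new_h -= FACTOR;
        } else {
            break; // 28x28 canvas: cannot shrink further
        }
    }
    (new_w, new_h)
}
```

Dimensions already within budget pass through untouched; only near-boundary sizes, where alignment padding would overflow, get nudged down.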
184-190: 🧹 Nitpick | 🔵 Trivial — Consider returning `TransformError` instead of panicking on shape mismatch.
While the `#[expect(clippy::expect_used)]` annotation documents the invariant, the repo guideline prefers returning recoverable errors over panics in production code. The shape is guaranteed by construction, so failure here indicates a bug, but a `TransformError` would allow cleaner error propagation.
💡 Alternative using map_err
```diff
-#[expect(
-    clippy::expect_used,
-    reason = "data has exactly 3*canvas_h*canvas_w elements by construction"
-)]
-Array3::from_shape_vec((3, canvas_h, canvas_w), data)
-    .expect("shape matches pre-allocated buffer")
+// INVARIANT: data has exactly 3*canvas_h*canvas_w elements by construction
+Array3::from_shape_vec((3, canvas_h, canvas_w), data).map_err(|e| {
+    TransformError::ShapeError(format!(
+        "Failed to create canvas tensor [3, {canvas_h}, {canvas_w}]: {e}"
+    ))
+})
```
Result<Array3<f32>, TransformError>.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multimodal/src/vision/processors/kimi_k25.rs` around lines 184 - 190, Change the panic on shape mismatch to return a TransformError: update the function that currently returns Array3::from_shape_vec((3, canvas_h, canvas_w), data).expect(...) to return Result<Array3<f32>, TransformError>, remove the #[expect(...)] annotation, and replace .expect("shape matches pre-allocated buffer") with a map_err that converts the shape/Vec error into a TransformError (include context like "invalid shape when constructing Array3 in kimi_k25"). Keep the original dimensions (3, canvas_h, canvas_w) and ensure call sites are updated to handle the Result.
263-286: ⚠️ Potential issue | 🟠 Major — Inconsistent config usage: resize uses `self.*`, normalization uses passed `config`.
`preprocess()` reads `image_mean`/`image_std` from the passed `config` (lines 255-256), but `compute_resize_config` at line 265 uses `self.*` fields. If the processor instance is reused with varying configs, the resize dimensions will be stale while normalization will be fresh. For consistency, either:
- Always use `self.*` fields and ignore the passed config entirely, or
- Create a runtime config from the passed `PreProcessorConfig` via `from_preprocessor_config(config)` and use its values throughout.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multimodal/src/vision/processors/kimi_k25.rs` around lines 263 - 286, preprocess() is mixing config sources: normalization uses the passed PreProcessorConfig (image_mean/image_std) while compute_resize_config() reads self.* fields, causing stale resize behavior when the processor is reused; to fix, create a runtime config from the passed config (e.g. via from_preprocessor_config(config)) at the start of preprocess() and use that runtime config for compute_resize_config(), resize_pad_and_normalize(), num_tokens, and any other parameters (instead of self.*), ensuring compute_resize_config, resize_pad_and_normalize, extract_patches, and grid/num_tokens calculations all use the same config values.
210-216: 🧹 Nitpick | 🔵 Trivial — Same `expect()` concern applies here.
The `as_standard_layout()` does guarantee contiguous memory, but per the same recommendation above, converting to a `Result` return would improve error handling consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multimodal/src/vision/processors/kimi_k25.rs` around lines 210 - 216, Replace the panic-producing expect on flat.as_slice() with proper error propagation: change the surrounding function to return Result, replace the `.as_slice().expect("as_standard_layout ...")` with `.as_slice().ok_or_else(|| /* appropriate error e.g. anyhow::anyhow!("non-contiguous memory despite as_standard_layout") or a crate-specific error variant */)?`, and propagate that Result up (using `?`) so `data` is a `&[T]` without panicking; reference the calls `as_standard_layout()`, `flat.as_slice()`, and the `data` binding when making the change.
314-317: ⚠️ Potential issue | 🟡 Minor — `calculate_num_tokens` ignores the `config` parameter.
The method ignores the passed `PreProcessorConfig` and uses `self.*` fields instead. This could cause token count mismatches if the caller passes a config different from what was used to construct the processor instance. For consistency with `preprocess()`, either:
- Document that `config` is intentionally unused (and consider removing the parameter if the trait allows), or
- Use `Self::from_preprocessor_config(config).compute_resize_config(...)` to ensure consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multimodal/src/vision/processors/kimi_k25.rs` around lines 314 - 317, The calculate_num_tokens method currently ignores the passed PreProcessorConfig; update it to construct a processor from the provided config and then call compute_resize_config so token counts match preprocess(): replace the direct use of self with a processor built via Self::from_preprocessor_config(config) and call .compute_resize_config(width as usize, height as usize).num_tokens (or alternatively document/remove the unused config if the trait permits), ensuring you reference calculate_num_tokens, PreProcessorConfig, compute_resize_config, Self::from_preprocessor_config and preprocess() when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/multimodal/src/vision/preprocessor_config.rs`:
- Around line 285-288: The from_value method currently serializes the
serde_json::Value to a string and delegates to from_json, which is inefficient;
extract the parsing logic that handles the nested media_proc_cfg into a shared
helper that accepts &serde_json::Value (e.g., parse_from_value or
from_value_inner) and performs deserialization directly from the Value for
fields used in PreprocessorConfig, then change from_value to call that helper
and have from_json reuse the same helper by parsing the string into a Value and
delegating to it; update or keep the same Result<Self, serde_json::Error> return
type and reference the existing function names from_value and from_json and the
media_proc_cfg handling to locate where to move the logic.
- Around line 519-548: In test_parse_kimi_nested_media_proc_cfg add assertions
that the two extra fields were captured in PreProcessorConfig::extra: check that
config.extra contains "in_patch_limit" with value 16384 and
"patch_limit_on_one_side" with value 512 (e.g., via asserts that lookup those
keys in config.extra and compare to the expected integers), so the test verifies
extraction of those fields alongside get_image_mean/get_image_std/get_patch_size
and merge_size.
---
Duplicate comments:
In `@crates/multimodal/src/vision/preprocessor_config.rs`:
- Around line 253-259: The current logic that sets config.patch_size only
handles integer form via media_cfg.get("patch_size").and_then(|v|
v.as_u64()...), so object-form values like {"height":14,"width":14} are ignored;
update the branch that fills config.patch_size to first try the integer
extraction (as_u64) and if that fails attempt to deserialize the value into
PatchSize using the existing deserialize_patch_size helper (or
serde_json::from_value) so both numeric and object representations are supported
for config.patch_size when reading media_cfg.get("patch_size").
In `@crates/multimodal/src/vision/processors/kimi_k25.rs`:
- Around line 96-126: compute_resize_config currently computes scale from
unaligned patch counts (using patch_size and in_patch_limit) but then pads to
factor alignment (factor()) which can increase num_tokens beyond in_patch_limit;
update compute_resize_config to perform a post-alignment check after computing
pad_width/pad_height/token_width/token_height/num_tokens and, if num_tokens >
self.in_patch_limit, iteratively reduce new_w and/or new_h (e.g., decrement by
factor or reduce scale) and recompute padding and token counts until num_tokens
<= self.in_patch_limit (ensure new_w/new_h remain >=1), then return
ResizeConfig; keep references to patch_size, in_patch_limit, factor(),
new_w/new_h, pad_width/pad_height, token_width/token_height, and num_tokens so
reviewers can find the change.
- Around line 184-190: Change the panic on shape mismatch to return a
TransformError: update the function that currently returns
Array3::from_shape_vec((3, canvas_h, canvas_w), data).expect(...) to return
Result<Array3<f32>, TransformError>, remove the #[expect(...)] annotation, and
replace .expect("shape matches pre-allocated buffer") with a map_err that
converts the shape/Vec error into a TransformError (include context like
"invalid shape when constructing Array3 in kimi_k25"). Keep the original
dimensions (3, canvas_h, canvas_w) and ensure call sites are updated to handle
the Result.
- Around line 263-286: preprocess() is mixing config sources: normalization uses
the passed PreProcessorConfig (image_mean/image_std) while
compute_resize_config() reads self.* fields, causing stale resize behavior when
the processor is reused; to fix, create a runtime config from the passed config
(e.g. via from_preprocessor_config(config)) at the start of preprocess() and use
that runtime config for compute_resize_config(), resize_pad_and_normalize(),
num_tokens, and any other parameters (instead of self.*), ensuring
compute_resize_config, resize_pad_and_normalize, extract_patches, and
grid/num_tokens calculations all use the same config values.
- Around line 210-216: Replace the panic-producing expect on flat.as_slice()
with proper error propagation: change the surrounding function to return Result,
replace the `.as_slice().expect("as_standard_layout ...")` with
`.as_slice().ok_or_else(|| /* appropriate error e.g.
anyhow::anyhow!("non-contiguous memory despite as_standard_layout") or a
crate-specific error variant */)?`, and propagate that Result up (using `?`) so
`data` is a `&[T]` without panicking; reference the calls
`as_standard_layout()`, `flat.as_slice()`, and the `data` binding when making
the change.
- Around line 314-317: The calculate_num_tokens method currently ignores the
passed PreProcessorConfig; update it to construct a processor from the provided
config and then call compute_resize_config so token counts match preprocess():
replace the direct use of self with a processor built via
Self::from_preprocessor_config(config) and call .compute_resize_config(width as
usize, height as usize).num_tokens (or alternatively document/remove the unused
config if the trait permits), ensuring you reference calculate_num_tokens,
PreProcessorConfig, compute_resize_config, Self::from_preprocessor_config and
preprocess() when making the change.
In `@crates/tokenizer/src/tiktoken.rs`:
- Around line 439-455: Summary: encode() currently always calls
encode_with_special_tokens and ignores the add_special_tokens flag, altering
tokenization for callers that pass false. Fix: in fn encode(&self, input: &str,
_add_special_tokens: bool) -> Result<Encoding>, honor the add_special_tokens
parameter by calling self.tokenizer.encode_with_special_tokens(input) when
add_special_tokens is true, and call self.tokenizer.encode_ordinary(input) when
add_special_tokens is false; return Encoding::Tiktoken(tokens) as before. Update
any tests that assumed the current behavior (the new test that expects encode to
pass add_special_tokens=false should be flipped to true once this is
implemented). Ensure function and tokenizer method names referenced are encode,
encode_with_special_tokens, encode_ordinary, and Encoding::Tiktoken.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: c3e68e81-085a-4618-bd14-548408c01844
📒 Files selected for processing (11)
- crates/multimodal/src/registry/kimi_k25.rs
- crates/multimodal/src/registry/mod.rs
- crates/multimodal/src/vision/image_processor.rs
- crates/multimodal/src/vision/preprocessor_config.rs
- crates/multimodal/src/vision/processors/kimi_k25.rs
- crates/multimodal/src/vision/processors/mod.rs
- crates/tokenizer/src/tiktoken.rs
- grpc_servicer/smg_grpc_servicer/sglang/servicer.py
- model_gateway/src/routers/grpc/common/stages/request_execution.rs
- model_gateway/src/routers/grpc/multimodal.rs
- model_gateway/src/routers/grpc/proto_wrapper.rs
- Move DynamicImage cloning inside spawn_blocking so the expensive deep-copy runs on the blocking thread pool instead of the tokio async runtime.
- Document why tiktoken's encode() intentionally ignores the add_special_tokens parameter (semantic mismatch with HF backend).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
- Refactor preprocessor_config: extract nested media_proc_cfg logic into shared apply_nested_media_cfg() to avoid round-trip serialization in from_value().
- Add TODO for configurable blocking thread pool size.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Kangyan Zhou <zky314343421@gmail.com>
518fc42 to c42b225 (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (2)
crates/multimodal/src/vision/preprocessor_config.rs (2)
267-280: ⚠️ Potential issue | 🟠 Major — Handle nested `patch_size` object form and avoid unchecked narrowing casts.
`media_proc_cfg.patch_size` currently only accepts scalar form, and `as u32`/`as usize` can silently truncate oversized values. That can produce invalid preprocessing dimensions.
🔧 Proposed fix
```diff
 if config.patch_size.is_none() {
     config.patch_size = media_cfg.get("patch_size").and_then(|v| {
-        v.as_u64().map(|ps| PatchSize {
-            height: Some(ps as u32),
-            width: Some(ps as u32),
-        })
+        v.as_u64()
+            .and_then(|ps| u32::try_from(ps).ok())
+            .map(|ps| PatchSize {
+                height: Some(ps),
+                width: Some(ps),
+            })
+            .or_else(|| {
+                let h = v
+                    .get("height")?
+                    .as_u64()
+                    .and_then(|x| u32::try_from(x).ok())?;
+                let w = v
+                    .get("width")
+                    .and_then(|x| x.as_u64())
+                    .and_then(|x| u32::try_from(x).ok())
+                    .unwrap_or(h);
+                Some(PatchSize {
+                    height: Some(h),
+                    width: Some(w),
+                })
+            })
     });
 }
 if config.merge_size.is_none() {
     config.merge_size = media_cfg
         .get("merge_kernel_size")
         .and_then(|v| v.as_u64())
-        .map(|v| v as usize);
+        .and_then(|v| usize::try_from(v).ok());
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multimodal/src/vision/preprocessor_config.rs` around lines 267 - 280, The patch_size and merge_size population logic must accept either a scalar or an object form and must avoid unchecked narrowing casts; update the block that sets config.patch_size and config.merge_size (referencing config.patch_size, PatchSize, media_cfg, and merge_kernel_size) to: 1) if media_cfg.get("patch_size") is an object, read "height" and "width" fields individually (falling back to a scalar if present), 2) when converting numeric values use checked conversion (e.g., u64.try_into().ok()) or explicit bounds checks and only assign PatchSize when conversions succeed, and 3) for merge_size convert the u64 to usize using a safe try_into().ok() and skip/return an error if conversion would overflow rather than using as u32/as usize.
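The scalar-or-object handling with checked narrowing can be sketched without pulling in serde_json; `Val` below is a toy stand-in for `serde_json::Value`, and `parse_patch_size` is a hypothetical helper, not the crate's code:

```rust
// Std-only sketch of scalar-or-object patch_size handling with checked
// narrowing. `Val` is a toy stand-in for serde_json::Value; the real code
// would match on the JSON value types instead.
enum Val {
    Int(u64),
    Obj { height: u64, width: Option<u64> },
}

fn parse_patch_size(v: &Val) -> Option<(u32, u32)> {
    match v {
        // Scalar form: {"patch_size": 14}
        Val::Int(ps) => {
            let ps = u32::try_from(*ps).ok()?; // reject values > u32::MAX
            Some((ps, ps))
        }
        // Object form: {"patch_size": {"height": 14, "width": 14}}
        Val::Obj { height, width } => {
            let h = u32::try_from(*height).ok()?;
            let w = match width {
                Some(w) => u32::try_from(*w).ok()?,
                None => h, // width defaults to height
            };
            Some((h, w))
        }
    }
}
```

Returning `None` on overflow (instead of silently wrapping with `as u32`) means an out-of-range config is skipped rather than producing bogus patch dimensions.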
521-550: ⚠️ Potential issue | 🟡 Minor — Expand the Kimi nested-config test to assert extracted limits.
The test includes `in_patch_limit` and `patch_limit_on_one_side` in JSON but does not verify they were captured into `extra`.
🧪 Proposed test assertions
```diff
     let std = config.get_image_std();
     assert!((std[0] - 0.5).abs() < 1e-6);
+    assert!((std[1] - 0.5).abs() < 1e-6);
+    assert!((std[2] - 0.5).abs() < 1e-6);
     assert_eq!(config.get_patch_size(0), 14);
     assert_eq!(config.merge_size, Some(2));
+    let in_patch_limit: Option<usize> = config.get_extra("in_patch_limit");
+    assert_eq!(in_patch_limit, Some(16384));
+    let patch_limit_on_one_side: Option<usize> = config.get_extra("patch_limit_on_one_side");
+    assert_eq!(patch_limit_on_one_side, Some(512));
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multimodal/src/vision/preprocessor_config.rs` around lines 521 - 550, Update the test_parse_kimi_nested_media_proc_cfg test to also assert that the nested media_proc_cfg limits are preserved: after creating config via PreProcessorConfig::from_json, check that config.extra (or the field that stores parsed extra settings) contains "in_patch_limit" == 16384 and "patch_limit_on_one_side" == 512; add assertions that extract these keys from config.extra and compare them to the expected integer values so the test verifies the limits were captured.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@model_gateway/src/routers/grpc/multimodal.rs`:
- Around line 704-708: Change the fast-path pixel serialization to only call
preprocessed.pixel_values.as_slice() and remove the fallback to
as_slice_memory_order(); specifically, update the logic that builds pixel_bytes
(the let pixel_bytes: Vec<u8> block) so it uses as_slice() alone and falls back
to the existing .iter() path for non-C-contiguous arrays, ensuring
preprocessed.pixel_values.as_slice_memory_order() is no longer invoked and the
.iter()-based branch handles all non-contiguous cases.
---
Duplicate comments:
In `@crates/multimodal/src/vision/preprocessor_config.rs`:
- Around line 267-280: The patch_size and merge_size population logic must
accept either a scalar or an object form and must avoid unchecked narrowing
casts; update the block that sets config.patch_size and config.merge_size
(referencing config.patch_size, PatchSize, media_cfg, and merge_kernel_size) to:
1) if media_cfg.get("patch_size") is an object, read "height" and "width" fields
individually (falling back to a scalar if present), 2) when converting numeric
values use checked conversion (e.g., u64.try_into().ok()) or explicit bounds
checks and only assign PatchSize when conversions succeed, and 3) for merge_size
convert the u64 to usize using a safe try_into().ok() and skip/return an error
if conversion would overflow rather than using as u32/as usize.
- Around line 521-550: Update the test_parse_kimi_nested_media_proc_cfg test to
also assert that the nested media_proc_cfg limits are preserved: after creating
config via PreProcessorConfig::from_json, check that config.extra (or the field
that stores parsed extra settings) contains "in_patch_limit" == 16384 and
"patch_limit_on_one_side" == 512; add assertions that extract these keys from
config.extra and compare them to the expected integer values so the test
verifies the limits were captured.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: dc4b2606-6e4c-4593-9486-b7b1d7af43a9
📒 Files selected for processing (3)
- crates/multimodal/src/vision/preprocessor_config.rs
- crates/tokenizer/src/tiktoken.rs
- model_gateway/src/routers/grpc/multimodal.rs
```rust
let pixel_bytes: Vec<u8> = if let Some(pixel_slice) = preprocessed
    .pixel_values
    .as_slice()
    .or_else(|| preprocessed.pixel_values.as_slice_memory_order())
{
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify whether producers of pixel_values can generate non-standard layouts
# that would make as_slice_memory_order() ordering-sensitive.
rg -n --type=rust -C2 'pixel_values|permuted_axes|reversed_axes|as_standard_layout|into_shape|from_shape_vec|to_shape' crates/multimodal/src
# Re-check serializer logic in this file.
rg -n --type=rust -C4 'fn serialize_pixel_values|as_slice_memory_order|as_slice\(' model_gateway/src/routers/grpc/multimodal.rsRepository: lightseekorg/smg
Length of output: 46674
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check the full serialize_pixel_values function to see current fallback logic
head -n 730 model_gateway/src/routers/grpc/multimodal.rs | tail -n +700
# Verify if any processor explicitly creates non-C-contiguous or permuted arrays
rg -A5 'from_shape_vec|into_shape|into_dyn|Array.*::zeros' crates/multimodal/src/vision/processors/ | head -50

Repository: lightseekorg/smg
Length of output: 5523
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Search for any code that might produce non-C-contiguous pixel_values
# (e.g., transpose, permute, slice operations without calling as_standard_layout)
rg -n 'permute|transpose|reversed_axes|to_owned|into_dyn' crates/multimodal/src/vision/processors/ | grep -v test | head -20

Repository: lightseekorg/smg
Length of output: 2203
Remove as_slice_memory_order() fallback from pixel serialization.
The fast path should only use as_slice() (which guarantees C-contiguous/row-major layout). The existing comment at lines 722–724 explicitly warns that as_slice_memory_order() can serialize Fortran-contiguous arrays in wrong dimension order, corrupting pixel tensor layout. This fallback is unnecessary since the .iter() path already handles non-C-contiguous cases correctly.
Proposed fix
```diff
 let pixel_bytes: Vec<u8> = if let Some(pixel_slice) = preprocessed
     .pixel_values
-    .as_slice()
-    .or_else(|| preprocessed.pixel_values.as_slice_memory_order())
+    .as_slice()
 {
```
Verify each finding against the current code and only fix it if needed.
In `@model_gateway/src/routers/grpc/multimodal.rs` around lines 704 - 708, Change
the fast-path pixel serialization to only call
preprocessed.pixel_values.as_slice() and remove the fallback to
as_slice_memory_order(); specifically, update the logic that builds pixel_bytes
(the let pixel_bytes: Vec<u8> block) so it uses as_slice() alone and falls back
to the existing .iter() path for non-C-contiguous arrays, ensuring
preprocessed.pixel_values.as_slice_memory_order() is no longer invoked and the
.iter()-based branch handles all non-contiguous cases.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c42b2255ec
```rust
let pixel_bytes: Vec<u8> = if let Some(pixel_slice) = preprocessed
    .pixel_values
    .as_slice()
    .or_else(|| preprocessed.pixel_values.as_slice_memory_order())
```
Keep logical tensor order for non-standard pixel buffers
The new as_slice_memory_order() fallback can serialize contiguous-but-nonstandard ndarray layouts (e.g., Fortran-order / transposed views) in memory order instead of logical row-major order, while still sending the original shape. In that case the receiver reconstructs the tensor with permuted values, producing incorrect pixel data for vision models. The previous fallback (iter()) preserved logical order; this optimization should only apply to standard-layout slices.
Useful? React with 👍 / 👎.
Summary
Add multimodal (image) support for moonshotai/Kimi-K2.5 in the gRPC PD router, matching the HTTP path's accuracy and improving TTFT at high concurrency.
- `<|media_pad|>` placeholder, `grid_thws` field layout, `media_placeholder_token_id` from config
- `[N, 3, 14, 14]` patches for MoonViT's Conv2d
- `media_proc_cfg` from Kimi's non-standard preprocessor_config.json
- `encode_with_special_tokens` so chat template special tokens (e.g., `<|media_pad|>`) are recognized as single token IDs
- `download_model_configs_from_hf` fetches config.json + preprocessor_config.json when not available locally

Validation
Test plan
- `cargo test -p llm-multimodal -- kimi` (17 tests)
- `cargo test -p llm-tokenizer` (103 tests including special token encoding)
- `pre-commit run --all-files` clean

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Improvements