
Commit f6ff2a0

Mijamind719 and codex committed
fix: avoid unsupported native local embedding batch mode
Legacy issue: investigate true llama-cpp native multi-sequence batch support for local embedding models such as bge-small-zh-v1.5-f16 (the current runtime reports n_seq_max=1, so embed_batch uses sequential mode).

Co-authored-by: GPT-5.4 <noreply@openai.com>
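The fallback the commit describes can be sketched as follows. This is a minimal illustration, not the repository's actual code: the callables `embed_one` and `embed_native_batch` are hypothetical stand-ins for the model's single-text and native multi-sequence embedding paths, and `n_seq_max` mirrors the value the llama-cpp runtime reports.

```python
from typing import Callable, List, Sequence


def embed_batch(
    texts: Sequence[str],
    embed_one: Callable[[str], List[float]],
    embed_native_batch: Callable[[Sequence[str]], List[List[float]]],
    n_seq_max: int,
) -> List[List[float]]:
    """Embed texts, using native multi-sequence batching only when supported.

    When the runtime reports n_seq_max == 1 (as with some local embedding
    models such as bge-small-zh-v1.5-f16), fall back to embedding each
    text sequentially instead of issuing an unsupported batch call.
    """
    if n_seq_max > 1:
        return embed_native_batch(texts)
    # Sequential fallback: one forward pass per text.
    return [embed_one(t) for t in texts]
```

With stub embedders, passing `n_seq_max=1` routes every text through the sequential path and never touches the native batch call.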
1 parent e352097 commit f6ff2a0

File tree: 3 files changed, +518 −8 lines
