
Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#18353

TL;DR: it's a lot, but there's a lot more testing than before.

Building on the PEG parser infrastructure introduced in #17136 by @aldehir, this is an experimental effort to migrate all chat template formats to the unified PEG approach.

Why migrate? The current monolithic common/chat.cpp has grown to ~25 ad-hoc parser implementations that are difficult to maintain. Many parsing bugs are hard to reproduce and diagnose (especially if the user wasn't running with --verbose).

The PEG infrastructure offers a cleaner path forward, w/ strong guarantees (modulo bugs) that anything the grammar allows the model to generate can also be parsed.
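To make the PEG approach concrete, here is a minimal illustrative sketch (invented names, not the llama.cpp API) of the two combinators at the heart of any PEG parser: sequence and ordered choice. A parser is modeled as a function that returns how many characters of the input it matched, or -1 on failure; ordered choice tries alternatives in a fixed priority order, which is what makes a PEG unambiguous by construction.

```cpp
#include <functional>
#include <string>
#include <string_view>

// A parser consumes a prefix of the input and reports the number of
// characters matched, or -1 on failure.
using Parser = std::function<int(std::string_view)>;

// Match an exact literal.
static Parser lit(std::string s) {
    return [s](std::string_view in) -> int {
        return in.substr(0, s.size()) == s ? (int) s.size() : -1;
    };
}

// Sequence: run a, then b on the remainder; fail if either fails.
static Parser seq(Parser a, Parser b) {
    return [a, b](std::string_view in) -> int {
        int n = a(in);
        if (n < 0) return -1;
        int m = b(in.substr(n));
        return m < 0 ? -1 : n + m;
    };
}

// Ordered choice: try a first; only if it fails, try b. The fixed
// priority order is what makes PEGs deterministic and unambiguous.
static Parser choice(Parser a, Parser b) {
    return [a, b](std::string_view in) -> int {
        int n = a(in);
        return n >= 0 ? n : b(in);
    };
}
```

Because the same grammar definition can drive both constrained generation and parsing, whatever the sampler is allowed to emit is guaranteed (modulo bugs) to match on the parse side.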

How to Test

# Add the branch as a remote
git remote add ochafik https://github.com/ochafik/llama.cpp.git
git fetch ochafik peg-migration-complete
git checkout ochafik/peg-migration-complete

# Build
cmake -B build && cmake --build build -j

# Run llama-server w/ and w/o experimental new parsers
./build/bin/llama-server --experimental-new-parsers --jinja -m your-model.gguf

# Run parser tests (new & legacy)
./build/bin/test-chat

Changes:

  • common/chat-parsers/*.cpp - 28 modular parser implementations
  • Feature flag --experimental-new-parsers - defaults to off, nothing changes by default
    • Kept old codepaths more or less intact to avoid regressions
  • PEG Infrastructure Changes to make it easier to write many PEG parsers (cc/ @aldehir - happy to revert if you disagree with any of these)
    • Enum-based tags - Replaced string AST tags with integer tag IDs for type safety and faster dispatch
    • Lambda mappers - Mapper functions defined as lambdas to reduce boilerplate
    • Shared tag enums - Many parsers now share common tags (CONTENT, REASONING, TOOL_CALL, etc.)
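A hypothetical sketch of the three infrastructure changes above, taken together (the type and enum names here are invented for illustration, not the actual llama.cpp types): AST nodes carry an integer tag from a shared enum instead of a string, and per-tag mapper functions are plain lambdas in a dispatch table.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Shared tag enum: integer comparisons dispatch faster than string
// compares, and a typo becomes a compile error instead of a silent
// mismatch at runtime.
enum common_peg_tag {
    TAG_CONTENT,
    TAG_REASONING,
    TAG_TOOL_CALL,
};

struct ast_node {
    common_peg_tag tag;
    std::string    text;
};

struct chat_msg {
    std::string content;
    std::string reasoning;
    std::string tool_call_json;
};

// Lambda mappers: one small lambda per tag instead of a class-based
// builder, keyed by the shared enum.
using mapper_fn = std::function<void(const ast_node &, chat_msg &)>;

static const std::map<common_peg_tag, mapper_fn> mappers = {
    { TAG_CONTENT,   [](const ast_node & n, chat_msg & m) { m.content        += n.text; } },
    { TAG_REASONING, [](const ast_node & n, chat_msg & m) { m.reasoning      += n.text; } },
    { TAG_TOOL_CALL, [](const ast_node & n, chat_msg & m) { m.tool_call_json += n.text; } },
};

// Fold a parsed AST into a chat message by dispatching on each node's tag.
static chat_msg map_ast(const std::vector<ast_node> & nodes) {
    chat_msg msg;
    for (const auto & n : nodes) {
        mappers.at(n.tag)(n, msg);
    }
    return msg;
}
```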

New "Needle" Streaming Tests

Existing streaming tests (tools/server/tests/unit/test_tool_call.py) required loading real models and cover only a subset of formats. This PR adds systematic coverage for all 21 formats without the model-loading overhead.

This migration was designed to be safe through systematic test constraints:

21 formats x 6+ scenarios = up to 126 regression tests (some scenarios filtered based on format capabilities)

Each format tests:

  • Content-only streaming
  • Reasoning/thinking tags
  • Tool calls (single and parallel)
  • Tool choices (none, auto, required)
  • Thinking enabled/disabled

How Needle Tests Work

The "needle" technique injects unique marker pairs into each semantic field. For example, in Hermes 2 Pro format with thinking and a tool call:

<think>Thinking $N1R$ deeply $N2R$ done</think>
Before $N1C$ middle $N2C$ after
<tool_call>{"name":"$N1TN$_0$N2TN$_0","arguments":{"$N1AK$_0$N2AK$_0":"$N1AV$_0$N2AV$_0"}}</tool_call>

The test parses this message at every character boundary (simulating streaming), and verifies:

| Check | What it catches |
| --- | --- |
| Both needles present | Content not lost or truncated |
| N1 before N2 (each pair) | Out-of-order emission, lack of streaming, buffering bugs |
| Tool names atomic | Function name never split mid-stream (tool name needles must land together, or not at all) |
| Arguments never regress | Tool args never get shorter during streaming |
| Keys complete in order | Key N finishes before key N+1 starts |

This aims to prove parsers are truly incremental: partial input produces partial output, fields stream in proper order, and nothing is buffered unnecessarily.
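As a rough sketch of how two of these invariants can be checked (this is illustrative, not the actual test harness; parse_args stands in for "parse this prefix and return the tool-argument string emitted so far", faked here as the identity function):

```cpp
#include <string>

// Stand-in for the real incremental parser: given a prefix of the raw
// message, return the tool-argument text emitted so far. Faked as the
// identity so the invariant checks below have something to run against.
static std::string parse_args(const std::string & prefix) {
    return prefix;
}

// Invariant: "arguments never regress". Re-parsing a longer prefix must
// never shrink the output, and the previous output must remain a prefix
// of the new one -- i.e. streamed fields only ever grow.
static bool args_never_regress(const std::string & full) {
    std::string prev;
    for (size_t i = 0; i <= full.size(); i++) {
        std::string cur = parse_args(full.substr(0, i));
        if (cur.size() < prev.size() || cur.compare(0, prev.size(), prev) != 0) {
            return false;
        }
        prev = cur;
    }
    return true;
}

// Invariant: "N1 before N2". Both needles must be present in the final
// output and the first must appear strictly before the second.
static bool needles_in_order(const std::string & out,
                             const std::string & n1,
                             const std::string & n2) {
    size_t p1 = out.find(n1);
    size_t p2 = out.find(n2);
    return p1 != std::string::npos && p2 != std::string::npos && p1 < p2;
}
```

Running these checks at every character boundary is what catches parsers that silently buffer a whole field before emitting it, or that re-emit a field from scratch on each chunk.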

Known Limitations

The PEG implementation has gaps vs legacy (TBC):

  1. JSON schema edge cases: allOf/anyOf/$ref patterns not fully handled
  2. maxLength grammar growth: Large constraints can explode grammar size (WIP: added until_max w/ a somewhat awkward implementation; maybe we just drop maxLength on XML formats)
  3. Streaming edge cases: Malformed input is handled differently than by the legacy parsers
  4. Ambiguous grammars: PEG's ordered choice commits to the first matching alternative, so grammars must be written unambiguously
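Limitation 4 in a nutshell (invented helper, not the llama.cpp API): PEG's ordered choice commits to the first alternative that matches and never backtracks across it, so a choice written as "a / ab" can never match "ab" in full. Grammars that relied on CFG-style ambiguity must be rewritten with the longer or more specific alternative first.

```cpp
#include <string_view>

// Ordered choice over two literals: once the first alternative matches,
// the second is never tried -- the defining (and occasionally surprising)
// property of PEGs.
static int match_first_of(std::string_view in, std::string_view a, std::string_view b) {
    if (in.substr(0, a.size()) == a) return (int) a.size();  // committed; b is never tried
    if (in.substr(0, b.size()) == b) return (int) b.size();
    return -1;
}
```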

Proposed Migration Plan

  1. Merge with safe default (legacy parsers active)
  2. Gather feedback from users opting in via --experimental-new-parsers
  3. Debug issues as they're reported
  4. Flip default once stable
  5. Drop legacy code entirely:
    • common/chat-parser.cpp: ~28 legacy parser functions (~900 lines)
    • common/chat.cpp: ~19 legacy init functions (~600 lines)
    • common/chat-peg-parser.cpp/.h: class-based builders/mappers (~220 lines)
    • common/chat-parser-xml-toolcall.cpp/.h: XML grammar builder (~900 lines) - new PEG parsers generate grammars directly from their parser definitions

Follow up work

  • Move new capability detections to Minja (minja#20) to simplify the test configuration:
    • supports_tool_call_id - Whether tool calls include IDs
    • reasoning_requires_tools - Whether thinking mode only works with tools
    • tools_emit_content_with_calls - Whether tool calls can include content
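The three capability flags above might surface from template introspection as something like the following (the struct itself is invented for illustration; only the field names come from the list above):

```cpp
// Hypothetical capability flags detected per chat template. Conservative
// defaults: assume a capability is absent until introspection proves it.
struct chat_template_caps {
    bool supports_tool_call_id         = false; // tool calls carry an "id" field
    bool reasoning_requires_tools      = false; // thinking mode only works when tools are present
    bool tools_emit_content_with_calls = false; // tool calls may be accompanied by content
};
```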

@loci-review

loci-review bot commented Dec 28, 2025

Explore the complete analysis inside the Version Insights

I've generated a comprehensive summary report for your project. The report shows a detailed performance analysis comparing two versions of the llama.cpp repository (PR #692 from auroralabs-loci).

Key Findings:

The analysis reveals significant performance changes across multiple functions, with the top 10 functions showing increases in response time ranging from 57% to 311%. The most affected areas include:

  1. STL container operations (vector and tree operations)
  2. Multiple binaries affected (llama-cvector-generator, libllama.so, llama-tts, llama-run)
  3. Consistent patterns in destructor operations showing ~83% increases

The report includes detailed metrics for each function, including response times, throughput changes, and specific code locations. It also provides recommendations for investigating memory management, profiling container usage, and reviewing the changes in PR #692.

Would you like me to provide more details about any specific aspect of this report?

@loci-dev force-pushed the main branch 9 times, most recently from f2e8c7f to b3f45e1 on December 29, 2025
@loci-dev force-pushed the main branch 30 times, most recently from 62bf34b to 10471d1 on January 29, 2026