fix(vector_io): wire file_processors provider into vector store file insertion (#5339)
Merged
franciscojavierarceo merged 4 commits into llamastack:main on Apr 2, 2026
Conversation
…insertion

Vector store file insertion was always using the legacy pypdf chunking path because the configured file_processors provider was never injected into vector_io providers. Add Api.file_processors as an optional dependency and thread it through all provider constructors to the mixin.

Signed-off-by: Alina Ryan <aliryan@redhat.com>
Force-pushed a43f5d7 to c6a1502
Contributor

Recording workflow completed. Providers: gpt, azure. Recordings have been generated and will be committed automatically by the companion workflow. Fork PR: recordings will be committed if you have "Allow edits from maintainers" enabled.
Co-Authored-By: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Contributor

✅ Recordings committed successfully. Recordings from the integration tests have been committed to this PR.
Force-pushed fe3dcfc to 0f08163
Add monkey-patching for PyPDFFileProcessorAdapter.process_file so that the file processor output is recorded during record mode and replayed during replay mode. This avoids running the actual file processor in CI replay tests, eliminating non-determinism from random UUIDs and platform-dependent tokenization.

Only intercepts calls with a file_id (internal calls from the vector store mixin). Direct HTTP uploads to the file-processors endpoint pass through to the real provider unmodified.

Signed-off-by: Alina Ryan <aliryan@redhat.com>
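The record/replay patch described above follows a standard monkey-patching pattern. A minimal sketch, assuming a stand-in adapter class (the real one lives in the llama-stack tree) and an in-memory recording store instead of the actual recording files:

```python
import asyncio


class PyPDFFileProcessorAdapter:
    """Hypothetical stand-in for the real adapter; only the patching pattern matters."""

    async def process_file(self, file_id=None, **kwargs):
        # Pretend this does expensive, non-deterministic PDF parsing.
        return {"chunks": [f"parsed-{file_id}"]}


RECORDINGS: dict = {}  # file_id -> recorded processor output
MODE = "record"        # switched to "replay" in CI replay tests

_original_process_file = PyPDFFileProcessorAdapter.process_file


async def patched_process_file(self, file_id=None, **kwargs):
    # Direct HTTP uploads carry no file_id: pass through to the real provider.
    if file_id is None:
        return await _original_process_file(self, file_id=file_id, **kwargs)
    # Replay mode: return the recorded output instead of running the processor.
    if MODE == "replay":
        return RECORDINGS[file_id]
    # Record mode: run the real processor and save its output for replay.
    result = await _original_process_file(self, file_id=file_id, **kwargs)
    RECORDINGS[file_id] = result
    return result


PyPDFFileProcessorAdapter.process_file = patched_process_file

recorded = asyncio.run(PyPDFFileProcessorAdapter().process_file(file_id="file-123"))
MODE = "replay"
replayed = asyncio.run(PyPDFFileProcessorAdapter().process_file(file_id="file-123"))
print(recorded == replayed)  # True: replay returns exactly the recorded output
```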
Force-pushed 0f08163 to c2c6cc6
When a file is posted to a vector store via POST /v1/vector_stores/{id}/files, the mixin code checks for a file_processor_api to process the file (e.g., using pypdf or docling for PDF parsing). However, that attribute was never wired up: no vector_io provider received the file_processors dependency from the resolver. As a result, vector store file insertion always fell through to the legacy PyPDF chunking path, regardless of which file_processors provider was configured.
This PR fixes that by:
- adding Api.file_processors as an optional dependency and threading it through the vector_io provider constructors to the mixin
- recording and replaying the file processor output so replay tests run deterministically in CI without running the actual processor
Test plan
Manual verification (local, inline::docling)
Save the following config to ~/.llama/distributions/providers-run/config.yaml:
Start the server:
LLAMA_STACK_LOGGING=providers=debug llama stack run ~/.llama/distributions/providers-run/config.yaml --port 8321

Upload a PDF, create a vector store, attach the file, and search:
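These steps can be sketched with curl against the local server on port 8321. The /v1/vector_stores/{id}/files path comes from this PR; the file-upload and search payload shapes are assumptions based on the OpenAI-compatible API, and the IDs below are placeholders you would substitute from the actual responses.

```shell
# Upload a PDF via the files API (OpenAI-compatible shape assumed)
curl -s http://localhost:8321/v1/files \
  -F purpose=assistants \
  -F file=@sample.pdf

# Create a vector store
curl -s http://localhost:8321/v1/vector_stores \
  -H 'Content-Type: application/json' \
  -d '{"name": "docs"}'

# Attach the uploaded file (vs_123 / file-abc are placeholder IDs)
curl -s http://localhost:8321/v1/vector_stores/vs_123/files \
  -H 'Content-Type: application/json' \
  -d '{"file_id": "file-abc"}'

# Search the vector store
curl -s http://localhost:8321/v1/vector_stores/vs_123/search \
  -H 'Content-Type: application/json' \
  -d '{"query": "what does the document say?"}'
```

These commands require the server started above to be running, so treat them as a usage sketch rather than a standalone script.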
Verify the debug logs show:

Using FileProcessor API to process file