
feat(embedding): introduce instruction capability flag in model details #980

Open
klamas1 wants to merge 4 commits into Kiln-AI:main from klamas1:klamas1/Instruct-for-embedding-model

Conversation


@klamas1 klamas1 commented Jan 25, 2026

Add support for custom embedding instructions

Add a new supports_instructions boolean attribute to the EmbeddingModelDetails class, allowing providers to specify whether their embedding models can handle instructional inputs. This enables users to provide custom instructions for advanced embedding models like Qwen3-embedding.

What does this PR do?

This PR implements support for custom embedding instructions in Kiln's RAG system. The feature allows users to provide custom instructions to embedding models that support them (like Qwen3-embedding), enabling more precise control over how documents and queries are vectorized.

Key Changes:

  1. API Schema Updates:

    • Added instructions field to EmbeddingProperties in the datamodel
    • Updated EmbeddingOptions class to include instructions support
    • Modified API responses to include supports_instructions flag for each model
  2. Backend Implementation:

    • Added _apply_instructions_to_texts() method in LitellmEmbeddingAdapter
    • Instructions are applied in Qwen3 format: "Instruct: {instructions}\nQuery: {text}"
    • Instructions are used for both document indexing and query embedding
  3. Frontend Updates:

    • Added "Custom Instructions" textarea in embedding config creation form
    • Field appears only for models that support instructions
    • Updated TypeScript types to include the new API fields
  4. Testing:

    • Added comprehensive tests for instruction application logic
    • Updated existing tests to include supports_instructions field
    • All tests pass, including the full test suite (3740+ tests)
  5. Documentation:

    • Updated memory bank with implementation details
    • Added architecture documentation for the new feature

Usage:

Users can now create embedding configurations with custom instructions:

{
  "name": "Qwen3 with instructions",
  "model_provider_name": "ollama",
  "model_name": "qwen3-embedding:8b",
  "properties": {
    "instructions": "Use semantic similarity for document retrieval. Focus on conceptual meaning rather than exact keyword matches."
  }
}

The instructions will be applied to all texts processed by this embedding model, ensuring consistent semantic understanding across the RAG pipeline.
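The formatting described above is a pure string transform, sketched below (the method name comes from this PR; the exact adapter wiring is simplified here):

```python
from typing import List, Optional


def apply_instructions_to_texts(
    input_texts: List[str], instructions: Optional[str]
) -> List[str]:
    """Prepend instructions in the Qwen3-Embedding format, if any are set."""
    if not instructions:
        return input_texts
    return [f"Instruct: {instructions}\nQuery: {text}" for text in input_texts]


print(apply_instructions_to_texts(["What is RAG?"], "Retrieve relevant passages"))
# → ['Instruct: Retrieve relevant passages\nQuery: What is RAG?']
```

When no instructions are configured, the input texts pass through unchanged, so models without instruction support behave exactly as before.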

Related Issues

No specific issues, this is an enhancement to improve embedding quality for advanced models.

Contributor License Agreement

I, @klamas1, confirm that I have read and agree to the Contributors License Agreement.

Checklists

  • Tests have been run locally and passed (3740 passed, 6690 skipped)
  • New tests have been added to any work in /lib (added 3 new tests for instruction logic)
  • Code formatting and linting checks pass
  • Type checking passes
  • Memory bank updated with implementation details

Summary by CodeRabbit

  • New Features
    • Embedding models now advertise a supports_instructions flag and embedding properties may include instructions.
  • UI
    • Conditional "Custom Instructions" field appears for models that support instructions; supplied instructions are included when creating configs.
  • Embedding flow
    • Provided instructions are prepended to input texts before embeddings are generated for supported models.
  • Tests
    • Added unit tests covering instruction application and related behaviors.

@gemini-code-assist
Contributor

Summary of Changes

Hello @klamas1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the RAG system by introducing the capability to provide custom instructions to embedding models. This feature allows for more precise control over how documents and queries are vectorized, leading to improved semantic understanding and embedding quality, particularly for advanced models that support such inputs.

Highlights

  • Instruction Capability Flag: Introduced a new supports_instructions boolean attribute to EmbeddingModelDetails and KilnEmbeddingModelProvider to indicate if an embedding model can handle instructional inputs. This flag is propagated through the API to the frontend.
  • API Schema and Data Model Updates: Added an optional instructions field to EmbeddingProperties in the data model and EmbeddingOptions in the adapter, allowing users to specify custom instructions for embedding generation.
  • Backend Instruction Application Logic: Implemented _apply_instructions_to_texts in LitellmEmbeddingAdapter to format input texts with instructions (e.g., 'Instruct: {instructions}\nQuery: {text}') before sending them to the embedding model, specifically for models like Qwen3-embedding.
  • Frontend User Interface: Added a 'Custom Instructions' textarea to the embedding configuration creation form in the UI. This field is dynamically displayed only for models that have supports_instructions set to true.
  • Model Configuration Updates: Updated the KilnEmbeddingModel definitions to explicitly mark Qwen3-embedding models as supporting instructions, enabling the new functionality for these advanced models.



Contributor

coderabbitai bot commented Jan 25, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds embedding-instructions support: new boolean supports_instructions on model metadata, an optional instructions property in embedding properties/options, UI to collect instructions when supported, and adapter logic to prepend instructions to texts before calling embedding providers.

Changes

Cohort / File(s) Summary
Server API / Tests
app/desktop/studio_server/provider_api.py, app/desktop/studio_server/test_provider_api.py
Added supports_instructions: bool to embedding model details and populated it from providers; tests updated to expect supports_instructions.
Frontend types & UI
app/web_ui/src/lib/api_schema.d.ts, app/web_ui/src/routes/(app)/docs/rag_configs/[project_id]/create_rag_config/create_embedding_form.svelte
Added supports_instructions: boolean to EmbeddingModelDetails, instructions?: string to EmbeddingProperties; UI adds customInstructions state and a conditional "Custom Instructions" field and includes instructions in create payload when supported.
Core datamodel & provider list
libs/core/kiln_ai/datamodel/embedding.py, libs/core/kiln_ai/adapters/ml_embedding_model_list.py
Added instructions to EmbeddingProperties TypedDict and supports_instructions: bool to KilnEmbeddingModelProvider; set supports_instructions=True for applicable providers.
Embedding adapter & tests
libs/core/kiln_ai/adapters/embedding/litellm_embedding_adapter.py, libs/core/kiln_ai/adapters/embedding/test_litellm_embedding_adapter.py
Extended EmbeddingOptions with optional instructions; added _apply_instructions_to_texts() to prepend instructions to inputs and use processed texts for API calls; added tests covering instruction handling.

Sequence Diagram(s)

sequenceDiagram
  participant UI as Client (UI)
  participant Server as Studio Server
  participant Adapter as LitellmAdapter
  participant API as Embedding Provider API

  UI->>Server: GET available embedding models
  Server-->>UI: models (include supports_instructions)
  UI->>Server: POST create RAG config (properties + instructions if supported)
  Server->>Adapter: build options (may include instructions)
  Adapter->>Adapter: _apply_instructions_to_texts(instructions, texts)
  Adapter->>API: Send processed texts to Embedding Provider API
  API-->>Adapter: Return embeddings
  Adapter-->>Server: Return embeddings/result
  Server-->>UI: Confirm creation / return result

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Suggested reviewers

  • scosman
  • chiang-daniel
  • tawnymanticore

Poem

🐰
I hop through lines and stitch a clue,
"Instructions here" for vectors new,
I prepend whispers, snug and bright,
So embeddings hum just right,
Hooray — now searches feel more true! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 37.50%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the title clearly and concisely describes the main change, introducing an instruction capability flag (supports_instructions) to embedding model details. It accurately reflects the primary objective of the PR.
  • Description check ✅ Passed: the description comprehensively covers the PR objectives, including the what, key changes across multiple layers, usage examples, and test results. Both required checklist items are completed and signed off.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a new supports_instructions capability flag for embedding models, enabling users to provide custom instructions for models that support this feature, such as Qwen3-embedding. The changes involve updating API schemas, implementing instruction application logic in the LitellmEmbeddingAdapter, adding corresponding frontend UI elements, and updating model definitions and tests. The implementation appears robust, with appropriate validation and conditional rendering in the UI.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In app/web_ui/src/routes/(app)/docs/rag_configs/[project_id]/create_rag_config/create_embedding_form.svelte:
- Around lines 217-228: the help text for the conditional FormElement is inconsistent. The info_description on the FormElement rendered when selectedModel && selectedModel.supports_instructions is in Russian. Locate the FormElement with id="custom_instructions" and replace the Russian string in info_description with the English equivalent (or wire it to the app's i18n system) to keep the locale consistent with the rest of the UI.
🧹 Nitpick comments (2)
libs/core/kiln_ai/datamodel/embedding.py (1)

14-16: Add schema metadata (and optional size limits) for instructions.

Right now instructions is an unconstrained str. Consider using Pydantic v2 Annotated + Field to document it and optionally cap size to avoid unbounded payloads.

♻️ Suggested update
-from typing import TYPE_CHECKING, List, Union
+from typing import TYPE_CHECKING, List, Union, Annotated

 class EmbeddingProperties(TypedDict, total=False):
     dimensions: PositiveInt
-    instructions: str
+    instructions: Annotated[
+        str,
+        Field(
+            description="Optional instructions to prepend/format embedding inputs.",
+            max_length=2000,
+        ),
+    ]
libs/core/kiln_ai/adapters/embedding/litellm_embedding_adapter.py (1)

29-37: Consider reusing validated instructions when formatting inputs.
_apply_instructions_to_texts reads raw properties while build_options validates separately; passing the validated value avoids drift and keeps logic in one place.

♻️ Suggested refactor
-    def _apply_instructions_to_texts(self, input_texts: List[str]) -> List[str]:
-        """Apply instructions to input texts in the format expected by the model."""
-        instructions = self.embedding_config.properties.get("instructions", None)
-        if not instructions:
-            return input_texts
+    def _apply_instructions_to_texts(
+        self, input_texts: List[str], instructions: str | None
+    ) -> List[str]:
+        """Apply instructions to input texts in the format expected by the model."""
+        if not instructions:
+            return input_texts
@@
-        # Apply instructions to input texts if present
-        processed_texts = self._apply_instructions_to_texts(input_texts)
+        # Validate once and reuse the same instructions everywhere
+        options = self.build_options()
+        # Apply instructions to input texts if present (validated)
+        processed_texts = self._apply_instructions_to_texts(
+            input_texts, options.instructions
+        )
@@
-        # Get options excluding instructions since they're applied to text
-        options = self.build_options()
+        # Get options excluding instructions since they're applied to text

Also applies to: 139-181, 200-210

Author

klamas1 commented Jan 27, 2026

@gemini-code-assist Do you have any recommendations on the PR code?

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces the supports_instructions flag to EmbeddingModelDetails and adds support for custom embedding instructions in Kiln's RAG system. It includes API schema updates, backend implementation in LitellmEmbeddingAdapter, frontend updates, testing, and documentation. The changes enable users to provide custom instructions for advanced embedding models like Qwen3-embedding, allowing more precise control over document and query vectorization. The review focuses on correctness and maintainability, identifying opportunities for improved error handling and code clarity.

@leonardmq
Collaborator

@klamas1 - Hey, thanks for the PR, I will give it a try as soon as I can, but it seems to be looking good!

Do you have an example of a use case where you used custom instructions with an embedding model? Never used one, so quite curious.

If you feel like it (totally optional), you could add a "paid test", which is one of our tests that make real calls to the provider.

There is an example here.

On paid tests, we have these decorators

@pytest.mark.paid
@pytest.mark.parametrize(
    "provider,model_name",
    get_all_embedding_models_and_providers(),
)

The @pytest.mark.paid one means the test only runs when run individually (but not when running ./checks.sh or all the tests); and the parametrize decorator runs it with all the models we have (if the model does not support the thing being tested, call pytest.skip like here)
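A self-contained sketch of what such a paid test could look like. The helper bodies below are stand-ins for illustration; the real get_all_embedding_models_and_providers and the capability lookup live in the Kiln codebase:

```python
import pytest


def get_all_embedding_models_and_providers():
    # Stand-in for the real helper from the Kiln test suite (assumption).
    return [
        ("ollama", "qwen3-embedding:8b"),
        ("openai", "text-embedding-3-small"),
    ]


def supports_instructions(provider: str, model_name: str) -> bool:
    # Stand-in capability lookup; the real code would consult
    # KilnEmbeddingModelProvider.supports_instructions.
    return model_name.startswith("qwen3-embedding")


@pytest.mark.paid
@pytest.mark.parametrize(
    "provider,model_name", get_all_embedding_models_and_providers()
)
def test_embedding_with_instructions(provider, model_name):
    if not supports_instructions(provider, model_name):
        pytest.skip("model does not support instructions")
    # A real paid test would build an adapter here and make an actual
    # embedding call with an `instructions` property set.
```

The skip keeps the parametrized run green for models that lack the capability, mirroring how the other paid tests in the suite handle unsupported features.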

@leonardmq leonardmq self-assigned this Jan 27, 2026
@leonardmq leonardmq self-requested a review January 27, 2026 18:29
@leonardmq leonardmq removed their assignment Jan 27, 2026
Author

klamas1 commented Jan 28, 2026

@leonardmq, thank you for your interest in the PR. In fact, quite a few models support the Instruct prefix, at least all models based on Qwen3-Embedding 0.6B, 4B, and 8B.

Here is a quote from their documents https://github.com/QwenLM/Qwen3-Embedding:
"Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English."

Just clarifying: are you suggesting adding another test, only for models with supports_instructions: true, and making it a paid test?

@leonardmq
Collaborator

@klamas1 - Sounds great! Code and UI look good.

Couple of things we need to do before we can merge (seems Maintainer Edits are not allowed on your fork so I cannot push there directly myself).

Can you do this:

  1. Cherry-pick this commit (or copy/paste the changes from it) into your branch:
# pushed basic paid test here: https://github.com/Kiln-AI/Kiln/commit/706ab71168a2115509a48aba1d63d4c70e636327
git cherry-pick 706ab71168a2115509a48aba1d63d4c70e636327
  2. You updated the API schema, but it seems the description for some fields changed afterwards. Run cd app/web_ui/src/lib && ./generate_schema.sh to regenerate the file. Then ./checks.sh will no longer show the warning it currently does (OpenAPI schema is not current. Run generate_schema.sh to update).
  3. Commit and push again

Then should be all good to merge

Author

klamas1 commented Jan 28, 2026

Hmm, "Allow edits by maintainers" is on.

Author

klamas1 commented Jan 28, 2026

@leonardmq Done.


# Format according to Qwen3-Embedding documentation
# "Instruct: [instructions]\nQuery: [text]"
return [f"Instruct: {instructions}\nQuery: {text}" for text in input_texts]
Collaborator

Their docs suggest that the instruction wrapper should only be used for the actual query (rather than the documents themselves).

Their README shows this:

# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-0.6B', padding_side='left')
model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-0.6B')

That suggests we should only apply the instructions during retrieval (when embedding the query) and not during embedding of the document chunks that we index.

Much of the current logic from the PR can be preserved, and we can add a flag to the embedding methods here to specify whether or not to apply the instructions (if any), have it default to False, and pass in True from here during retrieval:

query_embedding_result = await embedding_adapter.generate_embeddings(
  [query],
  apply_embedding_instructions=True,
)

@klamas1 - thoughts?

Looping in @tawnymanticore
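That split can be sketched as follows. The function and flag names follow the suggestion above and are not final API; document indexing would use the default, while retrieval passes the flag:

```python
from typing import List, Optional


def prepare_embedding_inputs(
    texts: List[str],
    instructions: Optional[str],
    apply_embedding_instructions: bool = False,
) -> List[str]:
    """Wrap texts in the Qwen3 instruct format only when requested.

    Document indexing keeps the default (False); query-time retrieval
    passes True so only the query gets the instruction prefix.
    """
    if not (apply_embedding_instructions and instructions):
        return texts
    return [f"Instruct: {instructions}\nQuery: {t}" for t in texts]


task = "Given a web search query, retrieve relevant passages that answer the query"

# Index time: documents are embedded as-is.
docs = prepare_embedding_inputs(["The capital of China is Beijing."], task)

# Query time: the instruction wrapper is applied.
queries = prepare_embedding_inputs(
    ["What is the capital of China?"], task, apply_embedding_instructions=True
)
print(docs[0])
print(queries[0])
```

Defaulting the flag to False keeps the indexing path unchanged for existing configs, so only the retrieval call site needs to opt in.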

Collaborator

Thanks for this contribution @klamas1! great idea.

re: should we be conditioning queries or documents on instructions...

Agree with @leonardmq. From that README they are conditioning queries on the instruction so that they will have a better hit rate against documents. Their example shows this:

conditioned_query = "Instruct: Given a web search query, retrieve relevant passages that answer the query \nQuery: What is the capital of China?"
document = "The capital of China is Beijing"

conditioned_query.dot(document) --> high

So for the indexing portion of RAG, we should be embedding the documents as normal, then building a custom query function that gets some instructions as conditionals.

Now that's what Qwen recommends anyway. Does the inverse work to save on runtime compute? Maaaaybeeee? It would depend on whether Qwen specifically fine-tuned with an Instruct/Query setup. If they did, then it must be conditioned_query.dot(document) at runtime. If this is zero-shot, then the inverse may be possible with the following framing:

("What is the capital of China?").dot("Instruct: Given a retrieval passage, what was the original web search query? \nPassage: The capital of China is Beijing"). This is probably all pretty testable in a Python notebook or something.

Author

klamas1 commented Jan 28, 2026

In fact, in some cases instructions are also needed when embedding documents, but that is indeed a corner case (clustering, for example).
I will add a flag to the search function.

Author

Later, I also want to try HyDE (https://aclanthology.org/2023.acl-long.99/).

I'm diving into this so deeply because my task is very specific; my documents are deeply nested, branched YAML configs with consistency across branches.

Maybe I'm going in the wrong direction, but if you have any ideas, I'd be happy to explore them.

Contributor

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
app/desktop/studio_server/provider_api.py (1)

184-184: `EmbeddingModelDetails.supports_instructions` should have a default value for safety and consistency.

supports_instructions: bool is declared without a default in EmbeddingModelDetails, making it required. While both visible construction sites in this file correctly supply the value from provider.supports_instructions / ollama_provider.supports_instructions, adding a default value would align with KilnEmbeddingModelProvider (which has supports_instructions: bool = Field(default=False, ...)) and protect against future code that may construct this model without explicitly passing the field.

Recommended: add a default to `EmbeddingModelDetails.supports_instructions`
 class EmbeddingModelDetails(BaseModel):
     id: str
     name: str
     n_dimensions: int
     max_input_tokens: int | None
     supports_custom_dimensions: bool
     suggested_for_chunk_embedding: bool
-    supports_instructions: bool
+    supports_instructions: bool = False

Author

klamas1 commented Feb 20, 2026

OMG, I can't commit, I've been defeated by the pre-commit hooks.

I can't get past errors I didn't introduce:

$ cd app/web_ui/src/lib && ./generate_schema.sh

✨ openapi-typescript 7.4.1

🚀 http://localhost:8757/openapi.json → api_schema.d.ts [251.1ms]

but I still get errors
svelte-check found 46 errors and 0 warnings in 10 files

@leonardmq
Collaborator

@klamas1 - Can you bypass the precommit hook and commit anyway? (git commit --no-verify -m "YOUR MESSAGE")

I will have a look and commit to your branch (if it lets me) - might be a minor issue with some types that we renamed a couple of days ago.

@klamas1 klamas1 force-pushed the klamas1/Instruct-for-embedding-model branch from 51f6b1f to e3f4829 on March 6, 2026 at 06:50
Author

klamas1 commented Mar 6, 2026

@leonardmq I did a commit with --no-verify.

However, I didn't like the solution of passing apply_embedding_instructions through multiple functions, even when it wasn't necessary. I got a little confused with the commits, so it's not in this PR.
I did it a little differently, but it resulted in more code than your suggestion, so I suggest you check out my version here klamas1#2

klamas1 added 4 commits March 6, 2026 21:50
Add a new `supports_instructions` boolean attribute to the `EmbeddingModelDetails` class, allowing providers to specify whether their embedding models can handle instructional inputs. This update is reflected in both the general provider API connection and the Ollama-specific embedding model retrieval functions.
- Add schema constraints for instructions field
- Refactor to eliminate validation duplication
- Update test cases for new method signature
@klamas1 klamas1 force-pushed the klamas1/Instruct-for-embedding-model branch from e3f4829 to c68a576 on March 6, 2026 at 16:50

Labels

None yet

Projects

None yet


3 participants