
Dataset with tools shouldn't be included in a fine tune when no tools are selected. #1137

Merged
chiang-daniel merged 11 commits into main from dchiang/filter-ft-dataset-with-skill on Mar 18, 2026

Conversation

@chiang-daniel
Contributor

@chiang-daniel chiang-daniel commented Mar 17, 2026

What does this PR do?

Previously, leaving tools and skills empty in the fine-tune flow could still include runs with tools and allow reuse of datasets that were not actually tool-free. This was caused by empty selections not being preserved consistently through the UI/API flow, and by mixed tool/skill datasets being collapsed into the same representation as truly tool-free datasets.

Fixed by making empty selections explicit end-to-end, distinguishing mismatched datasets from empty tool sets, and preserving inherited empty tool/skill constraints in SDG.

  • Fixed empty tool/skill filtering for fine-tune data and dataset reuse.
  • Fixed skills to follow the same filtering behavior as tools.
  • Fixed mixed tool/skill datasets being incorrectly treated as tool-free.
  • Fixed the UI/API flow so an empty tool/skill selection is preserved end-to-end.
  • Fixed SDG to preserve and lock inherited empty tool/skill selections.
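
A minimal sketch of the filtering rule these fixes converge on. The function and field names here are illustrative, not the actual Kiln API; the three-way semantics (no filter vs explicit "no tools" vs a specific tool set) are the point:

```python
def run_matches_tool_filter(run_tool_ids, tool_filter):
    """Decide whether a run passes a tool filter.

    Illustrative semantics (not the actual Kiln signatures):
      None  -> no filtering; every run matches
      []    -> explicit "no tools"; only tool-free runs match
      [...] -> only runs whose tool set matches exactly
    """
    if tool_filter is None:
        return True
    return sorted(run_tool_ids or []) == sorted(tool_filter)

# An explicit empty filter now excludes runs that use tools:
runs = [{"id": "r1", "tools": []}, {"id": "r2", "tools": ["search"]}]
kept = [r["id"] for r in runs if run_matches_tool_filter(r["tools"], [])]
# kept == ["r1"]
```

Before the fix, an empty selection collapsed into the "no filter" branch, which is why runs with tools still appeared.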

Checklists

  • Tests have been run locally and passed
  • New tests have been added to any work in /lib

Summary by CodeRabbit

  • New Features

    • Added explicit "no tools/skills" filter (empty_tool_filter) for dataset selection.
    • UI preserves and can lock an inherited empty tool/skill state to prevent edits; selection props now consistently pass arrays.
  • Bug Fixes

    • Corrected handling of datasets with missing, empty, or mismatched tool sets and ensured stable transmission of tool/skill params.
    • Download/export paths now treat missing tool info as empty to avoid errors.
  • Tests

    • Expanded synthetic end-to-end tests for dataset selection, downloads, tag computations, and filtering.

@chiang-daniel chiang-daniel requested a review from sfierro March 17, 2026 21:31
@coderabbitai
Contributor

coderabbitai bot commented Mar 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds explicit empty-tool filtering across finetune flows: backend endpoint gains empty_tool_filter, dataset/run filtering now distinguishes None vs [], datamodel and tests adopt None-vs-empty semantics, and frontend preserves and propagates explicit empty inherited tool sets and locks UI accordingly.
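
Because query-string encoders such as openapi-fetch silently drop empty array parameters, the client cannot send `tool_ids=[]` directly; the new `empty_tool_filter` flag stands in for it. A hedged sketch of how the endpoint could resolve the two parameters back into the intended filter (parameter names match the PR, the function itself is hypothetical):

```python
from typing import Optional

def resolve_tool_filter(
    tool_ids: Optional[list[str]], empty_tool_filter: bool
) -> Optional[list[str]]:
    # empty_tool_filter=True stands in for an explicit empty selection,
    # which the client cannot encode as an empty query array.
    if empty_tool_filter:
        return []
    return tool_ids  # None -> no filter; non-empty list -> filter by IDs
```

With this resolution, `None` and `[]` survive the round trip through the query string and reach the dataset-filtering logic intact.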

Changes

  • Finetune API Backend (app/desktop/studio_server/finetune_api.py): Adds an empty_tool_filter param to finetune_dataset_info; treats tool_ids is not None as the trigger to filter (allowing [] to mean "no tools"); clarifies compute_finetune_tag_info tool_filter semantics; loads missing skills as empty lists; removes duplicate imports.
  • Finetune API Tests (app/desktop/studio_server/test_finetune_api.py): Adds synthetic-data fixtures and builders; new tests cover tool/filter permutations, empty-tool behavior, tag counting, and dataset mismatch cases.
  • Frontend: API types (app/web_ui/src/lib/api_schema.d.ts): Adds an optional empty_tool_filter?: boolean to the finetune_dataset_info GET query type.
  • Frontend: Dataset & Finetune flows (app/web_ui/src/routes/(app)/dataset/.../add_data/+page.svelte, app/web_ui/src/routes/(app)/fine_tune/.../create_finetune/+page.svelte, app/web_ui/src/routes/(app)/fine_tune/.../select_finetune_dataset.svelte): Preserves explicit empty fine_tuning_tools when present; always passes a (possibly empty) required_tool_ids array; sends tool_ids when non-empty, omits it when undefined, and sends empty_tool_filter: true when the array is explicitly empty.
  • Frontend: Synthesis Workflow (app/web_ui/src/routes/(app)/generate/.../synth/+page.svelte): Detects presence vs absence of inherited fine_tuning_tools; computes fine_tuning_tools_locked (treating an inherited empty set as locked); includes inherited tool state in URL saved-state comparisons and disables the relevant selectors when locked.
  • Data Model: DatasetSplit (libs/core/kiln_ai/datamodel/dataset_split.py): Changes the DatasetToolInfo.tools type to `list[str] | None`, where None marks datasets with mismatched tool sets.
  • Data Model Tests (libs/core/kiln_ai/datamodel/test_dataset_split.py): Updates expectations to reflect None for mismatched tool sets and the updated missing-tool semantics.
  • UI: Run Config Selectors (app/web_ui/src/lib/ui/run_config_component/tools_selector.svelte, .../skills_selector.svelte): When a selector is disabled and mandatory_* is empty, clears the bound selection arrays to avoid stale selections.


Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Suggested reviewers

  • leonardmq
  • scosman
  • sfierro

Poem

🐰 I hop through datasets, tools held tight or free,
Empty means chosen, None means mystery.
Filters now listen when nothing's in sight,
The rabbit nods softly — the logic’s just right. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 8.70%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check (✅ Passed): the title clearly and specifically describes the main change, preventing datasets with tools from being included in fine-tuning when no tools are selected, which directly aligns with the core purpose of this PR.
  • Description check (✅ Passed): the description provides comprehensive context on what the PR fixes and includes both required checklist items marked as complete. However, it lacks a link to a related GitHub issue in the 'Related Issues' section.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the accuracy and consistency of dataset selection for fine-tuning, particularly concerning tool-based filtering. It addresses a critical scenario where datasets containing tools might have been erroneously included when a user explicitly requested datasets without any tools. The changes involve a more nuanced interpretation of tool filter parameters in the backend API, a client-side mechanism to correctly transmit 'no tools selected' states, and an update to the data model to better represent tool information within datasets, especially in cases of tool mismatches.

Highlights

  • Refined Tool Filtering Logic: The backend now accurately distinguishes between a request to apply no tool filter (represented by None) and a request to explicitly filter for datasets that contain no tools (represented by an empty list []). This prevents datasets with tools from being included when the user intends to select datasets without any tools.
  • API and Client-Side Tool Filter Handling: A new empty_tool_filter boolean parameter was added to the finetune_dataset_info API endpoint. This parameter, along with corresponding client-side logic, addresses a limitation where openapi-fetch omits empty array query parameters, ensuring that the 'explicitly no tools' filter is correctly communicated to the backend.
  • Enhanced Dataset Tool Information Model: The DatasetToolInfo model was updated to allow its tools field to be None. This None value now specifically indicates that a dataset contains runs with mismatched tool sets, providing a clearer semantic distinction from an empty list [] which signifies that all runs in the dataset consistently have no tools.
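
The highlighted data-model semantics can be illustrated with a small reduction over per-run tool sets. This is a hypothetical helper, not the actual compute_tool_info implementation, but it captures the distinction the PR introduces:

```python
from typing import Optional

def summarize_dataset_tools(runs_tool_ids: list[list[str]]) -> Optional[list[str]]:
    """Collapse per-run tool sets into dataset-level tool info.

    Returns the shared tool list when every run agrees ([] when all runs
    are tool-free), and None when runs disagree (mismatched tool sets).
    """
    distinct = {tuple(sorted(ids)) for ids in runs_tool_ids}
    if len(distinct) > 1:
        return None  # mixed dataset: runs use different tool sets
    return list(distinct.pop()) if distinct else []
```

Under this scheme, `[]` can no longer be confused with a mixed dataset, which is what previously let non-tool-free datasets be reused as if they were tool-free.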


Changelog
  • app/desktop/studio_server/finetune_api.py
    • Reordered import statements for better organization.
    • Modified compute_finetune_tag_info to correctly interpret None as no tool filter and [] as an explicit filter for no tools.
    • Added empty_tool_filter parameter to finetune_dataset_info endpoint to handle client-side empty array transmission issues.
    • Updated finetune_dataset_info dataset filtering logic to correctly apply tool filters, including the None vs [] distinction and handling datasets with mismatched tools.
    • Adjusted download_dataset_jsonl to safely handle None for tool_info.tools by providing an empty list.
  • app/web_ui/src/lib/api_schema.d.ts
    • Added empty_tool_filter as an optional boolean query parameter to the finetune_dataset_info operation.
  • app/web_ui/src/routes/(app)/dataset/[project_id]/[task_id]/[run_id]/run/+page.svelte
    • Refactored the computation of properties_for_list into a reactive block for improved Svelte component reactivity.
  • app/web_ui/src/routes/(app)/fine_tune/[project_id]/[task_id]/create_finetune/+page.svelte
    • Modified SelectFinetuneDataset component usage to always pass an array (potentially empty) for required_tool_ids, aligning with new backend logic.
  • app/web_ui/src/routes/(app)/fine_tune/[project_id]/[task_id]/create_finetune/select_finetune_dataset.svelte
    • Implemented client-side logic to send empty_tool_filter: true when required_tool_ids is an empty array, or tool_ids with the actual IDs, to the finetune_dataset_info API.
  • libs/core/kiln_ai/datamodel/dataset_split.py
    • Updated DatasetToolInfo model to allow tools to be None, explicitly indicating mismatched tool sets within a dataset.
    • Modified compute_tool_info to set tools to None when tool mismatches are detected, and to an empty set when no tools are consistently present.
  • libs/core/kiln_ai/datamodel/test_dataset_split.py
    • Updated test cases for compute_tool_info to reflect the new None behavior for tools when mismatches occur or no tools are present.

@github-actions

github-actions bot commented Mar 17, 2026

📊 Coverage Report

Overall Coverage: 91%

Diff: origin/main...HEAD

  • app/desktop/studio_server/finetune_api.py (100%)
  • libs/core/kiln_ai/datamodel/dataset_split.py (100%)

Summary

  • Total: 17 lines
  • Missing: 0 lines
  • Coverage: 100%


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly implements a crucial distinction between having no tool filter (None) and explicitly filtering for datasets with no tools ([]). The changes are well-implemented across the Python backend, Svelte frontend, and data models. The introduction of empty_tool_filter is a smart workaround for frontend limitations, and the data model change in DatasetToolInfo to use None for mismatched tools is a good semantic improvement. Overall, this is a solid PR that improves the correctness and clarity of the fine-tuning API. I have one suggestion for a minor refactoring to improve code conciseness.

@chiang-daniel chiang-daniel marked this pull request as ready for review March 17, 2026 22:50

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/desktop/studio_server/finetune_api.py`:
- Line 546: The current call replaces tool_info.tools=None with [] losing the
mixed-vs-empty distinction; change the call to pass tool_info.tools through
unchanged (i.e., remove the "or []" collapse) so load_skills_from_tool_ids
receives None for mixed-tool datasets and [] only when the source truly provides
an empty list, or alternatively update load_skills_from_tool_ids to accept and
distinguish None vs []; specifically, modify the invocation at the top-level
(the expression producing skills_dict) to pass tool_info.tools as-is and ensure
load_skills_from_tool_ids's logic handles None as "mixed" and [] as "no tools".

In `app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte`:
- Around line 229-237: The URL parsing currently maps an explicit
fine_tuning_tools= to [] via has_fine_tuning_tools / fine_tuning_tools_list but
the URL-vs-saved-state equality check omits fine_tuning_tools, so a saved state
with fine_tuning_tools=null can be treated as "same" and override the URL;
update the state-comparison logic to include the fine_tuning_tools semantics
(use has_fine_tuning_tools and/or fine_tuning_tools_list) so that an explicit
empty list (has_fine_tuning_tools=true and fine_tuning_tools_list==[]) is
considered different from null and preserved, and apply the same change to the
equivalent block around the other occurrence (lines ~255-260).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 9141143d-fc0c-423c-8a52-bee9c33a00ef

📥 Commits

Reviewing files that changed from the base of the PR and between e2f7408 and cdeadb5.

📒 Files selected for processing (10)
  • app/desktop/studio_server/finetune_api.py
  • app/desktop/studio_server/test_finetune_api.py
  • app/web_ui/src/lib/api_schema.d.ts
  • app/web_ui/src/routes/(app)/dataset/[project_id]/[task_id]/[run_id]/run/+page.svelte
  • app/web_ui/src/routes/(app)/dataset/[project_id]/[task_id]/add_data/+page.svelte
  • app/web_ui/src/routes/(app)/fine_tune/[project_id]/[task_id]/create_finetune/+page.svelte
  • app/web_ui/src/routes/(app)/fine_tune/[project_id]/[task_id]/create_finetune/select_finetune_dataset.svelte
  • app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte
  • libs/core/kiln_ai/datamodel/dataset_split.py
  • libs/core/kiln_ai/datamodel/test_dataset_split.py

@chiang-daniel chiang-daniel changed the base branch from sfierro/skills-feedback to main March 17, 2026 23:38
@chiang-daniel chiang-daniel requested a review from leonardmq March 18, 2026 00:15

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
app/desktop/studio_server/finetune_api.py (1)

545-551: Verify the mixed-dataset export guard only blocks generated prompts.

When custom_system_message is present, this endpoint never needs skills: system_message_from_request() ignores them and DatasetFormatter only consumes the resolved string. The unconditional 400 therefore blocks a path that otherwise has enough information to export. If mixed datasets are only unsupported when Kiln has to infer skills, gate the error to the generator path instead.

💡 Possible adjustment
         tool_info = dataset.tool_info()
-        if tool_info.tools is None:
+        if tool_info.tools is None and not custom_system_message:
             raise HTTPException(
                 status_code=400,
                 detail="Dataset contains mixed tool/skill selections and cannot be exported",
             )
-        skills_dict = load_skills_from_tool_ids(task, tool_info.tools)
+        skills_dict = (
+            {}
+            if tool_info.tools is None
+            else load_skills_from_tool_ids(task, tool_info.tools)
+        )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/desktop/studio_server/finetune_api.py` around lines 545 - 551, The
current guard always raises HTTPException when tool_info.tools is None, which
blocks exports even when a custom_system_message is provided; change the check
to only reject mixed tool/skill datasets when the generator path is used (i.e.,
when no custom_system_message is present). Concretely, in the block using
dataset.tool_info(), only raise the 400 if tool_info.tools is None AND
custom_system_message is falsy, and only call load_skills_from_tool_ids(task,
tool_info.tools) when custom_system_message is falsy so that
system_message_from_request() / DatasetFormatter can proceed without skills.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte`:
- Around line 229-241: The comparison is order-sensitive because
fine_tuning_tools_list is joined without normalization; sort the list before
creating fine_tuning_tools_key so that order changes don't alter the key (i.e.,
compute fine_tuning_tools_list, then sort it, then set fine_tuning_tools_key =
fine_tuning_tools_list === null ? null : fine_tuning_tools_list.join(","));
apply the same normalization (sort before join) to the analogous skills/tools
keys referenced around the other block (the symbols to update are
fine_tuning_tools_list, fine_tuning_tools_key and the corresponding skills
list/key used at lines ~263-267).

---

Nitpick comments:
In `@app/desktop/studio_server/finetune_api.py`:
- Around line 545-551: The current guard always raises HTTPException when
tool_info.tools is None, which blocks exports even when a custom_system_message
is provided; change the check to only reject mixed tool/skill datasets when the
generator path is used (i.e., when no custom_system_message is present).
Concretely, in the block using dataset.tool_info(), only raise the 400 if
tool_info.tools is None AND custom_system_message is falsy, and only call
load_skills_from_tool_ids(task, tool_info.tools) when custom_system_message is
falsy so that system_message_from_request() / DatasetFormatter can proceed
without skills.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 526f4ca0-b2c5-471d-ab4c-de6e1eaf3848

📥 Commits

Reviewing files that changed from the base of the PR and between 2b90e0e and cd2493c.

📒 Files selected for processing (3)
  • app/desktop/studio_server/finetune_api.py
  • app/desktop/studio_server/test_finetune_api.py
  • app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte (1)

238-241: ⚠️ Potential issue | 🟡 Minor

Normalize fine_tuning_tools before comparing state.

This join(",") check is order-sensitive, so the same inherited tool/skill set in a different order still looks like a different session and triggers the replace-session dialog.

💡 Suggested change
+      const normalizeFineTuningTools = (tools: string[] | null) =>
+        tools === null ? null : [...tools].sort().join(",")
+
-      const fine_tuning_tools_key =
-        fine_tuning_tools_list === null
-          ? null
-          : fine_tuning_tools_list.join(",")
+      const fine_tuning_tools_key =
+        normalizeFineTuningTools(fine_tuning_tools_list)
@@
-          ($saved_state.fine_tuning_tools === null
-            ? null
-            : $saved_state.fine_tuning_tools.join(",")) ===
-            fine_tuning_tools_key
+          normalizeFineTuningTools($saved_state.fine_tuning_tools) ===
+            fine_tuning_tools_key

Also applies to: 264-267

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte`
around lines 238 - 241, The comparison uses fine_tuning_tools_list.join(",")
which is order-sensitive; normalize the list before serializing by sorting or
otherwise canonicalizing it (e.g., use fine_tuning_tools_list.slice().sort()
then join) so identical sets in different orders produce the same
fine_tuning_tools_key; update the creation of fine_tuning_tools_key and the
analogous usage around the other occurrence (the block referenced at lines
264-267) to use the same normalized sorting approach and keep null handling
unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte`:
- Around line 758-770: The selectors retain stale selections when
mandatory_tools/mandatory_skills becomes an empty array because load_tools()
only watches project_id/task_id; in tools_selector.svelte and
skills_selector.svelte add a reactive statement that sets tools = [] (and skills
= [] respectively) when the corresponding mandatory_tools/mandatory_skills is an
array with length 0 and the locked/disabled flag is true; locate the variables
and existing load_tools() logic (e.g., the load_tools() watcher around line 41)
and add the clear-on-empty reactive check so the locked selector emits an
explicit empty selection instead of stale values.

---

Duplicate comments:
In `app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte`:
- Around line 238-241: The comparison uses fine_tuning_tools_list.join(",")
which is order-sensitive; normalize the list before serializing by sorting or
otherwise canonicalizing it (e.g., use fine_tuning_tools_list.slice().sort()
then join) so identical sets in different orders produce the same
fine_tuning_tools_key; update the creation of fine_tuning_tools_key and the
analogous usage around the other occurrence (the block referenced at lines
264-267) to use the same normalized sorting approach and keep null handling
unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 2f9905ad-e106-474c-b497-2b137d87969d

📥 Commits

Reviewing files that changed from the base of the PR and between cd2493c and df3dbff.

📒 Files selected for processing (1)
  • app/web_ui/src/routes/(app)/generate/[project_id]/[task_id]/synth/+page.svelte


@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/web_ui/src/lib/ui/run_config_component/tools_selector.svelte`:
- Around line 45-54: The current reactive block clears selections whenever
tools_selector_settings.disabled is true and mandatory_tools resolves to an
empty array (but mandatory_tools defaults to []), which can wipe valid
selections; change the condition to only treat an "empty lock" as explicit when
the caller actually provided mandatory_tools (e.g.,
tools_selector_settings.mandatory_tools !== undefined) — so update the reactive
statement that references tools_selector_settings and mandatory_tools to check
that mandatory_tools is defined (not just an empty array) before clearing tools,
and ensure you don't persist the cleared value unless this explicit-empty
condition is met.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: ab9d57cd-d3f5-4f68-91db-982c68b03899

📥 Commits

Reviewing files that changed from the base of the PR and between df3dbff and 2a43824.

📒 Files selected for processing (2)
  • app/web_ui/src/lib/ui/run_config_component/skills_selector.svelte
  • app/web_ui/src/lib/ui/run_config_component/tools_selector.svelte

@chiang-daniel chiang-daniel merged commit c13bea0 into main Mar 18, 2026
12 checks passed
@chiang-daniel chiang-daniel deleted the dchiang/filter-ft-dataset-with-skill branch March 18, 2026 02:31