@scosman Currently, MCP is close to being the unofficial standard for function calling (similar to the OpenAI API: not official, but a de-facto standard). So it feels much more worthwhile to implement MCP here.
There are different examples of how MCP could be supported. I found that it works well with Llama models in oterm; maybe you can take a look there as an example implementation.
But either way, +1: it would be extremely cool to implement in Kiln. I've found this tool amazing for no-code fine-tuning, so if you add MCP it will be the best tool on the market 😍
Is your feature request related to a problem? Please describe.
Fine-tuning for a Task that requires tool / function calls (as in these examples) is currently not supported.
Describe the solution you'd like
A rough idea of what that would likely involve:
- Task-level (or Project-level) defined tools, where we define the tools and their schemas, etc.
- Runs would display tool calls in some way and probably allow editing to correct incorrect tool calls

Describe alternatives you've considered
Not using tools 😄
Additional context
It seems like fine-tuning would be helpful for cases where tool usage depends on nuanced / semi-subjective context. For example, in a RAG setup, a model may need to decide whether to query a vector database or generate an answer directly based on context. Fine-tuning could help improve when and how it makes this decision.
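To make the RAG scenario concrete, here is a minimal sketch of what such a task-level tool definition could look like in an OpenAI-style function-calling schema. The `search_vector_db` name and its parameters are hypothetical, purely for illustration:

```python
import json

# Hypothetical task-level tool definition in OpenAI-style function-calling
# format. Given this schema, the model decides per-request whether to call
# the tool or answer directly -- the decision fine-tuning would improve.
search_tool = {
    "type": "function",
    "function": {
        "name": "search_vector_db",  # hypothetical tool name
        "description": (
            "Retrieve passages from the project's vector database "
            "when the question needs external context."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query text."},
                "top_k": {"type": "integer", "description": "Number of passages to return."},
            },
            "required": ["query"],
        },
    },
}

# A task could store and ship its tool list as JSON alongside the prompt:
tools_json = json.dumps([search_tool], indent=2)
```

A Runs view could then render any `tool_calls` the model emits against this schema, and let the user correct a wrong tool choice or arguments before the run is used as training data.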
Found this cookbook by OpenAI on fine-tuning for Function Calling: https://cookbook.openai.com/examples/fine_tuning_for_function_calling
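Building on that cookbook, a single fine-tuning example that teaches the when-to-call decision might look roughly like the sketch below, as one line of chat-format JSONL. The tool name and arguments are illustrative, and the exact field names should be verified against the current OpenAI fine-tuning docs:

```python
import json

# One illustrative training example (one JSONL line): instead of answering
# directly, the assistant turn responds with a tool call, teaching the
# model *when* tool use is appropriate.
example = {
    "messages": [
        {"role": "system", "content": "Answer directly, or call search_vector_db when you need project context."},
        {"role": "user", "content": "What did our Q3 design doc say about retries?"},
        {
            "role": "assistant",
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "search_vector_db",  # hypothetical tool
                    # Arguments are a JSON-encoded string, not a nested object
                    "arguments": json.dumps({"query": "Q3 design doc retries"}),
                },
            }],
        },
    ],
    # The tool schema is included with each training line so the model
    # learns the association between schema and call.
    "tools": [{
        "type": "function",
        "function": {
            "name": "search_vector_db",
            "description": "Retrieve passages from the project's vector database.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }],
}

jsonl_line = json.dumps(example)  # one line of the fine-tuning dataset
```

A complementary example in the same dataset would show the assistant answering directly when the context makes retrieval unnecessary, so the model sees both sides of the decision.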