WIP: multiturn UI #1135

leonardmq wants to merge 6 commits into leonard/kil-447-feat-stream-multiturn-ai-sdk-openai-protocols from
Conversation
CodeRabbit: Review skipped — ignored keyword(s) in the title.
Gemini Code Assist summary: This pull request lays the groundwork for interactive, multi-turn conversations within the application. It enhances the core task execution logic to maintain conversation context across multiple turns and introduces streaming capabilities for real-time feedback. The user interface has been updated to reflect these new conversational features, providing a more dynamic interaction experience.
Activity
📊 Coverage Report

Overall coverage: 91%
Diff: origin/leonard/kil-461-feat-nesting-task-runs-into-each-other...HEAD

Line-by-line diff coverage (lines marked ! are not covered by tests):

libs/core/kiln_ai/adapters/adapter_registry.py

Lines 21-29
     21  from kiln_ai.datamodel.task_output import DataSource
     22  from kiln_ai.utils.exhaustive_error import raise_exhaustive_enum_error
     23
     24  if TYPE_CHECKING:
  !  25      from kiln_ai.adapters.model_adapters.base_adapter import (
     26          AiSdkStreamResult,
     27          OpenAIStreamResult,
     28      )

Lines 66-74
     66          ValueError: If input is not provided
     67      """
     68      # Validate input is provided
     69      if new_input is None:
  !  70          raise ValueError("Input is required. Provide new_input as either str or dict.")
     71
     72      # Resolve task_run to a prior_trace if provided
     73      prior_trace = None
     74      if task_run is not None:

libs/server/kiln_server/run_api.py

Lines 370-381
    370      )
    371
    372      input = request.plaintext_input
    373      if task.input_schema() is not None:
  ! 374          input = request.structured_input
    375
    376      if input is None:
  ! 377          raise HTTPException(
    378              status_code=400,
    379              detail="No input provided. Ensure you provided the proper format (plaintext or structured).",
    380          )

Lines 386-394
    386      _, prior_run = task_and_run_from_id(
    387          project_id, task_id, request.task_run_id
    388      )
    389      if prior_run.trace is None:
  ! 390          raise HTTPException(
    391              status_code=400,
    392              detail="Cannot continue run: no trace available from the prior run.",
    393          )
    394      prior_trace = prior_run.trace

Lines 425-436
    425      )
    426
    427      input = request.plaintext_input
    428      if task.input_schema() is not None:
  ! 429          input = request.structured_input
    430
    431      if input is None:
  ! 432          raise HTTPException(
    433              status_code=400,
    434              detail="No input provided. Ensure you provided the proper format (plaintext or structured).",
    435          )

Lines 437-453
    437      # Continue from prior run if task_run_id is provided
    438      prior_trace = None
    439      prior_run = None
    440      if request.task_run_id is not None:
  ! 441          _, prior_run = task_and_run_from_id(
    442              project_id, task_id, request.task_run_id
    443          )
  ! 444          if prior_run.trace is None:
  ! 445              raise HTTPException(
    446                  status_code=400,
    447                  detail="Cannot continue run: no trace available from the prior run.",
    448              )
  ! 449          prior_trace = prior_run.trace
    450
    451      stream_result = adapter.invoke_ai_sdk_stream(
    452          input,
    453          prior_trace=prior_trace,

Lines 458-466
    458          async for event in stream_result:
    459              if isinstance(event, AiSdkStreamEvent):
    460                  yield f"data: {event.model_dump()}\n\n"
    461              else:
  ! 462                  yield f"data: {event}\n\n"
    463          yield "data: [DONE]\n\n"
    464
    465      return StreamingResponse(
    466          stream_generator(),
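The streaming endpoint above frames each event as a server-sent-events record ("data: <payload>\n\n") and terminates the stream with "data: [DONE]". A minimal client-side sketch of that framing, treating payloads as opaque strings (parse_sse_events is a hypothetical helper, not part of this PR):

```python
def parse_sse_events(raw: str) -> list[str]:
    """Split an SSE stream into data payloads, stopping at the [DONE] sentinel."""
    events = []
    for block in raw.split("\n\n"):
        # Each record in this framing is a single "data: ..." line.
        if not block.startswith("data: "):
            continue
        payload = block[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream marker, not a real event
        events.append(payload)
    return events
```

A real client would decode each payload (e.g. as JSON) rather than keep it as a string; this sketch only shows the framing contract.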
Code Review
This pull request introduces conversation continuation and streaming capabilities across the application. The frontend now features a "Continue Conversation" button and a UI to manage conversation state, loading prior runs, and displaying their traces. The backend includes new API endpoints for OpenAI-style and AI SDK-style streaming, and the existing /run endpoint is enhanced to accept a task_run_id for continuing conversations. A new run_task helper function unifies task execution, supporting synchronous and streaming modes, and handles the prior_trace for conversational context. An improvement opportunity was identified in the run_task helper to provide a more informative error message when a specified task_run_id is not found, by including the actual ID in the error message.
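The conversational-context idea described here can be sketched as follows. This is an illustration only: build_messages is a hypothetical helper (not in the PR), and it assumes a trace is a list of OpenAI-style chat messages:

```python
def build_messages(prior_trace, new_input):
    """Combine a prior run's trace with the new user turn into one message list."""
    # Start from the prior conversation, or empty for a fresh run.
    messages = list(prior_trace or [])
    # Append the new turn so the model sees the full conversational context.
    messages.append({"role": "user", "content": new_input})
    return messages
```

In the PR itself this role is played by passing prior_trace into the adapter's invoke methods.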
    # Look up by ID
    task_run = datamodel.TaskRun.from_id_and_parent_path(
        task_run, kiln_task.path
    )
    if task_run is None:
        raise ValueError(f"TaskRun not found: {task_run}")
The error message for a not-found TaskRun can be improved. Currently, if a task_run ID is passed as a string and not found, the task_run variable is reassigned to None, and the error message becomes TaskRun not found: None. It would be more helpful to show the ID that was not found.
You can store the ID in a separate variable before looking it up to provide a more informative error message.
Suggested change:

    # Look up by ID
    task_run_id = task_run
    task_run = datamodel.TaskRun.from_id_and_parent_path(
        task_run_id, kiln_task.path
    )
    if task_run is None:
        raise ValueError(f"TaskRun not found: {task_run_id}")
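The pattern behind this suggestion can be shown generically. Here lookup_or_raise and its lookup callable are hypothetical stand-ins for TaskRun.from_id_and_parent_path, used only to demonstrate that preserving the ID yields a useful error message:

```python
def lookup_or_raise(task_run_id, lookup):
    """Resolve an ID via the given lookup; raise with the original ID if missing."""
    task_run = lookup(task_run_id)
    if task_run is None:
        # The ID is still available here because it was never overwritten.
        raise ValueError(f"TaskRun not found: {task_run_id}")
    return task_run
```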
… of github.com:Kiln-AI/Kiln into leonard/multiturn-hack-nested
What does this PR do?

Temporary multiturn UI:

Dataset page