fix(ai-gateway): fix providers dropdown, prompt test, model selector, compare panel#3578

Draft
gorkem-bwl wants to merge 6 commits into develop from fix/ai-gateway-providers-endpoint

Conversation

@gorkem-bwl
Contributor

Summary

  • Fix empty Provider/Model dropdowns in endpoint creation — Python returned strings instead of { id, mode } objects
  • Implement prompt test streaming endpoint (was 501 stub) — resolves variables, streams LLM response via SSE
  • Make prompt model selector searchable with a 2-char threshold (2,500+ models were freezing the dropdown)
  • Move endpoint selector above tabs in prompt editor so Test set and Compare tabs can access it
  • Fix prompt test CORS — use relative URL for fetch to go through Vite proxy
  • Fix Compare panel version/endpoint selects — missing getOptionValue

Test plan

  • Create endpoint: Provider and Model dropdowns populate correctly
  • Prompt editor: model search works, shows filtered results after 2 chars
  • Prompt editor Chat tab: send message, get streaming LLM response
  • Prompt editor Test set tab: endpoint selector visible, Run all works
  • Prompt editor Compare tab: version A/B selectable, endpoint selectable

…rings

The /providers endpoint returned model names as plain strings but the
frontend useGatewayModels() hook expects objects with { id, mode }.
The mode filter (chat/completion) rejected all models because
string.mode is undefined, resulting in empty provider/model dropdowns
in the endpoint creation form.

Now returns { id: model_name, mode: info.mode } for each model.
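As an illustration of why the string payload broke the dropdowns (the type names and filter logic here are assumptions; the PR only states that `useGatewayModels()` filters by `mode`), a sketch of the mode filter against the old and new response shapes:

```typescript
// Assumed shape of a model entry after the fix.
type GatewayModel = { id: string; mode: "chat" | "completion" };

// Before the fix the /providers endpoint returned plain strings:
const before: unknown[] = ["gpt-4o", "claude-3-haiku"];
// After the fix it returns { id, mode } objects:
const after: GatewayModel[] = [
  { id: "gpt-4o", mode: "chat" },
  { id: "claude-3-haiku", mode: "chat" },
];

// Approximation of the hook's mode filter: accessing .mode on a
// string yields undefined, so every string is rejected.
const chatModels = (models: unknown[]): GatewayModel[] =>
  models.filter((m): m is GatewayModel => (m as GatewayModel).mode === "chat");

console.log(chatModels(before).length); // 0 — strings have no .mode
console.log(chatModels(after).length);  // 2
```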
…oint

## Changes
- Implement /prompts/test endpoint: resolves variables, streams LLM
  response via SSE (was a 501 stub). Uses the same stream_chat_completion
  service as the tenant chat. Frontend's streamPromptTest() reader works
  with this format.
- Fix /providers endpoint: return model objects { id, mode } instead of
  plain strings. The frontend useGatewayModels() hook filters by mode
  (chat/completion) which rejected all models when mode was undefined.
  Fixes empty Provider and Model dropdowns in endpoint creation form.
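A minimal sketch of how a reader like `streamPromptTest()` might consume the SSE stream described above. The `data: ...` framing and the `[DONE]` sentinel are assumptions based on the common SSE convention, not confirmed by the PR:

```typescript
// Extract payloads from a text/event-stream chunk (assumed framing).
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((data) => data !== "[DONE]"); // drop the end-of-stream sentinel
}

console.log(parseSseChunk("data: Hello\n\ndata: world\n\ndata: [DONE]\n"));
// ["Hello", "world"]
```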
…reshold

The model dropdown had 2,500+ items from all providers. Replaced the
Select component with a searchable Field that shows a filtered dropdown
only after typing 2+ characters. Shows max 20 results, highlights
current selection. Prevents the browser from freezing on render.
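The threshold-and-cap behavior can be sketched as a pure filter (function and constant names are illustrative, not taken from the PR's code):

```typescript
const MIN_QUERY_LENGTH = 2; // dropdown stays closed below this
const MAX_RESULTS = 20;     // cap results so rendering stays cheap

function filterModels(models: string[], query: string): string[] {
  if (query.length < MIN_QUERY_LENGTH) return [];
  const q = query.toLowerCase();
  return models
    .filter((m) => m.toLowerCase().includes(q))
    .slice(0, MAX_RESULTS);
}

const models = ["gpt-4o", "gpt-4o-mini", "claude-3-haiku"];
console.log(filterModels(models, "g"));   // [] — below the 2-char threshold
console.log(filterModels(models, "gpt")); // ["gpt-4o", "gpt-4o-mini"]
```

Capping at 20 results means at most 20 DOM nodes render per keystroke, regardless of how many of the 2,500+ models match.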

The endpoint selector and variable inputs were inside the Chat tab
only. Test set and Compare tabs couldn't see which endpoint was
selected, making Run All silently fail. Moved the selector above
all tabs so it's shared across Chat, Compare, and Test set.

streamPromptTest used fetch() with an absolute URL (localhost:3000) from
localhost:5173, triggering a CORS preflight that blocked the actual POST.
Changed to a relative /api/... URL which goes through Vite's dev proxy,
avoiding CORS entirely. This is the same approach axios uses.
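The origin mismatch can be checked with a small helper (a simplified illustration: a cross-origin POST with a JSON body triggers a preflight, while a same-origin request never does; the endpoint path is from the PR, the helper itself is hypothetical):

```typescript
// Does this URL resolve to a different origin than the page?
// Cross-origin JSON POSTs trigger a CORS preflight; same-origin ones don't.
function isCrossOrigin(url: string, pageOrigin: string): boolean {
  // Relative URLs resolve against the page origin, so they are same-origin.
  const resolved = new URL(url, pageOrigin);
  return resolved.origin !== pageOrigin;
}

const page = "http://localhost:5173";
console.log(isCrossOrigin("http://localhost:3000/api/ai-gateway/prompts/test", page)); // true
console.log(isCrossOrigin("/api/ai-gateway/prompts/test", page)); // false — Vite proxies it
```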
@gorkem-bwl gorkem-bwl marked this pull request as draft March 21, 2026 15:13
gorkem-bwl added a commit that referenced this pull request Apr 1, 2026
## Changes
- Replace 501 stub in /prompts/test with real SSE streaming via
  resolve_endpoint_for_key + stream_chat_completion
- Fix CORS: use relative URL (/api/ai-gateway/prompts/test) instead of
  absolute GATEWAY_API_URL for the prompt test fetch
- Add getOptionValue to Compare panel version and endpoint Select dropdowns
  so they are actually selectable

## Context
Reimplements the key fixes from PR #3578 on a clean branch from develop,
avoiding the 292-file diff from the stale base branch.
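The getOptionValue fix can be illustrated in isolation. A select component typically matches the current value against each option via an extractor; without one, object options never compare equal to the stored value, so clicks appear to do nothing. The `Version` shape and helper below are assumptions for illustration, not the component's actual API:

```typescript
// Assumed option shape for the Compare panel's version dropdown.
type Version = { id: string; name: string };

// The extractor the Select was missing: maps an option to its value.
const getOptionValue = (v: Version): string => v.id;

// How a select decides which option is currently selected.
function isSelected(option: Version, selectedValue: string): boolean {
  return getOptionValue(option) === selectedValue;
}

const versions: Version[] = [
  { id: "v1", name: "Version 1" },
  { id: "v2", name: "Version 2" },
];
console.log(versions.filter((v) => isSelected(v, "v2")).map((v) => v.name));
// ["Version 2"]
```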
@gorkem-bwl gorkem-bwl added this to the 2.3 milestone Apr 5, 2026