Bug Description
Hi, thanks for the great project!
I'm trying to connect a local Ollama model to OpenMAIC using the OpenAI-compatible API, but generation always fails with "LLM returned empty response", even though the Ollama endpoint itself works correctly.
Environment
- OS: Windows 11
- OpenMAIC: latest version, running via npm / pnpm dev server on http://localhost:3000 (not Docker)
- Browser: Microsoft Edge (latest on Windows 11)
- LLM provider: Ollama OpenAI-compatible endpoint
- Local model: glm-4.7-flash from the Ollama library
Ollama setup
Ollama is running locally and the OpenAI-compatible endpoint is enabled.
- GET http://localhost:11434/v1/models returns the model list including glm-4.7-flash.
- Example chat/completions call from PowerShell:
```powershell
Invoke-WebRequest -Uri "http://localhost:11434/v1/chat/completions" `
  -Method POST `
  -UseBasicParsing `
  -Headers @{ "Content-Type" = "application/json"; "Authorization" = "Bearer sk-test" } `
  -Body '{
    "model": "glm-4.7-flash",
    "messages": [
      { "role": "user", "content": "Hello, please introduce yourself briefly." }
    ],
    "stream": false
  }'
```
This returns HTTP 200 with JSON like:
```json
{
  "id": "chatcmpl-471",
  "object": "chat.completion",
  "created": 1774427688,
  "model": "glm-4.7-flash",
  "system_fingerprint": "fp_ollama",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "..."
      },
      "finish_reason": "stop"
    }
  ]
}
```
So the OpenAI-compatible endpoint seems to behave as expected.
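To make concrete what a non-empty response looks like from a client's point of view, here is a minimal TypeScript sketch (purely illustrative, not OpenMAIC's actual code; the type and function names are my own) of extracting the assistant content from a response shaped like the one above, throwing the same error message the UI reports when the content is missing or blank:

```typescript
// Hypothetical sketch, not OpenMAIC code: pull the assistant text out of a
// standard OpenAI-style chat.completion response body.
interface ChatCompletion {
  choices?: { message?: { role?: string; content?: string } }[];
}

function extractAssistantContent(resp: ChatCompletion): string {
  const content = resp.choices?.[0]?.message?.content;
  if (!content || content.trim() === "") {
    // Same wording the OpenMAIC UI shows on failure
    throw new Error("LLM returned empty response");
  }
  return content;
}
```

Against the sample JSON above, this check succeeds, which is why the 500 from /api/verify-model is surprising.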
OpenMAIC configuration
In Settings → 语言模型 (Language Model) → ollama:
- Protocol: OpenAI 协议 (OpenAI protocol)
- Base URL: http://localhost:11434/v1
- API key: a dummy string like sk-123456... (Ollama does not validate it, but the UI requires one); "需要 API 密钥" (API key required) is checked
- Model: glm-4.7-flash
The request URL shown in the UI is: http://localhost:11434/v1/chat/completions
Steps to Reproduce
1. Install and run Ollama locally on Windows 11 with the glm-4.7-flash model pulled.
2. Verify the Ollama OpenAI-compatible endpoint works: GET http://localhost:11434/v1/models returns the model, and POST http://localhost:11434/v1/chat/completions returns a valid response (HTTP 200, non-empty choices[0].message.content).
3. Run OpenMAIC via pnpm dev (latest version), accessible at http://localhost:3000.
4. In OpenMAIC Settings → 语言模型 (Language Model) → ollama, configure:
   - Protocol: OpenAI 协议 (OpenAI protocol)
   - Base URL: http://localhost:11434/v1
   - API key: any dummy string (e.g., sk-123456)
   - Model: glm-4.7-flash
5. Click "测试连接" (Test Connection) in the settings panel, or try to generate content or preview a lesson using the configured Ollama provider.
6. Observe: the UI shows "生成失败 (generation failed) / LLM returned empty response", and the DevTools Network panel shows GET /api/verify-model returning 500 Internal Server Error.
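For completeness, the request from step 2 can also be assembled programmatically. A small TypeScript sketch (the helper name is mine, not part of OpenMAIC) that builds the same call as the PowerShell example, so the repro can be run from Node as well:

```typescript
// Hypothetical helper (not OpenMAIC code): build the same chat/completions
// request the PowerShell repro sends to Ollama's OpenAI-compatible endpoint.
function buildChatRequest(baseUrl: string, model: string, prompt: string) {
  return {
    url: `${baseUrl}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Ollama ignores the key, but OpenAI-style clients often require one
        Authorization: "Bearer sk-test",
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
        stream: false,
      }),
    },
  };
}

const req = buildChatRequest("http://localhost:11434/v1", "glm-4.7-flash", "Say hi in English");
// To run against a live Ollama instance: await fetch(req.url, req.init)
```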
Expected Behavior
OpenMAIC should be able to use a standard OpenAI-style chat/completions response from a local OpenAI-compatible backend (Ollama) and generate content normally. At a minimum, the /api/verify-model endpoint should not return 500 when the underlying Ollama endpoint is reachable and returns a valid OpenAI-format JSON response with a non-empty choices[0].message.content.
Actual Behavior
/api/verify-model returns 500 Internal Server Error when using the Ollama provider configured with http://localhost:11434/v1.
The frontend shows "生成失败 / LLM returned empty response" and generation/preview always fails, even though the underlying Ollama endpoint returns a valid OpenAI-style JSON response with non-empty choices[0].message.content.
Browser console errors observed:
```
api/verify-model:1 Failed to load resource: the server responded with a status of 500 (Internal Server Error)
chunk-XMRXO6YU.js:3241 Uncaught (in promise) Error: No elements found
```
This happens both with Chinese prompts and with simple English prompts like "Say hi in English". Direct calls to Ollama's /v1/chat/completions (via PowerShell) succeed and return valid responses.
Deployment Method
Local development (npm run dev / pnpm dev / yarn dev)
Browser
Microsoft Edge (latest on Windows 11)
Operating System
Windows 11
Relevant Logs / Screenshots
Browser DevTools Network panel:
- GET /api/verify-model → 500 Internal Server Error
Browser Console:
```
api/verify-model:1 Failed to load resource: the server responded with a status of 500 (Internal Server Error)
chunk-XMRXO6YU.js:3241 Uncaught (in promise) Error: No elements found
```
UI error message: 生成失败 (generation failed) / LLM returned empty response
Additional Context
Questions
- Is there any additional configuration required to use an OpenAI-compatible local backend (like Ollama) with OpenMAIC?
- Could you point me to where /api/verify-model validates the response, or what JSON schema it expects, so I can provide more debugging info or even submit a PR?
- If Ollama is not yet officially supported, is there a recommended way to disable verify-model for custom providers and just pass through responses?
Thanks in advance!