Marcus Green edited this page Jun 9, 2025 · 29 revisions

Configure with Ollama

See the Ollama FAQ for full details:

https://github.com/ollama/ollama/blob/main/docs/faq.md

Ollama defaults to serving on port 11434. If you can configure Ollama to serve on port 80, the rest of this page will not be necessary.
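The bind address and port can be changed with the `OLLAMA_HOST` environment variable (documented in the Ollama FAQ linked above). A minimal sketch, assuming you start the server manually rather than via systemd:

```shell
# Bind Ollama to all interfaces on port 80.
# Note: ports below 1024 normally require root privileges (or
# CAP_NET_BIND_SERVICE on Linux).
OLLAMA_HOST=0.0.0.0:80 ollama serve
```

If Ollama runs as a systemd service, the FAQ describes setting the same variable via a service override instead.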

As an admin user, go to

admin/settings.php?section=httpsecurity

Remove this entry from the cURL blocked hosts list:

192.168.0.0/16

Assuming your Ollama server is not listening on port 80, add its port to the cURL allowed ports list, e.g.

11434

Assuming you have put your Ollama server on a local machine with the DNS name myollama, the endpoint URL will probably look something like

http://myollama:11434/v1/chat/completions
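You can check that the endpoint answers before configuring the plugin. A sketch, assuming the hypothetical host name myollama from above and that a model such as mistral has already been pulled with `ollama pull mistral`:

```shell
# Send a minimal OpenAI-style chat completion request to the local Ollama
# server; a JSON response with a "choices" array indicates the endpoint works.
curl http://myollama:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

If this curl call fails from the Moodle server itself, recheck the cURL blocked hosts and allowed ports settings above.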

I have found that the mistral model responds well to being asked to return only JSON.
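One way to ask for JSON-only output is a system message. A sketch, assuming the same myollama endpoint; the grading schema in the prompt is purely illustrative, not something this plugin requires:

```shell
# Ask mistral to reply with JSON only, using a system message to constrain
# the output format.
curl http://myollama:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral",
        "messages": [
          {"role": "system",
           "content": "Respond only with valid JSON. Do not include any prose."},
          {"role": "user",
           "content": "Rate this essay and reply as {\"score\": 0-10, \"feedback\": \"text\"}: The sky is blue."}
        ]
      }'
```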

An interesting cloud service with open-source models is available at https://console.groq.com. Get a key and use this endpoint: https://api.groq.com/openai/v1/chat/completions

See https://console.groq.com/docs/models for a list of supported models.
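The same OpenAI-style request works against Groq, with a bearer token added. A sketch, assuming your key is in the `GROQ_API_KEY` environment variable; the model id below is an assumption, so pick a current one from the models page above:

```shell
# Chat completion request against Groq's OpenAI-compatible endpoint.
# GROQ_API_KEY must hold a key from https://console.groq.com
curl https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3.1-8b-instant",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```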
