The project looks neat, but I can't see a way to configure local LLM servers such as Ollama. Is there somewhere to set a custom OpenAI-compatible API endpoint?
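For context, Ollama serves an OpenAI-compatible API at `http://localhost:11434/v1`, so a configurable base URL would be enough. A minimal sketch of what I mean (the model name and config values here are hypothetical, just illustrating the shape of the request):

```python
import json

# Assumption: these would come from the project's settings instead of a hardcoded
# OpenAI URL. Ollama's OpenAI-compatible API lives under /v1 on port 11434.
base_url = "http://localhost:11434/v1"
endpoint = f"{base_url}/chat/completions"

# Standard OpenAI-style chat payload; "llama3" is just an example local model.
payload = json.dumps({
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello"}],
})
```

Any client that lets you override the base URL (and skip or stub the API key) can then talk to the local server with the same request format it already uses for OpenAI.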