
Add Ollama provider for self-hosted LLM inference #38

Merged
The-R4V3N merged 3 commits into master from feat/ollama-provider on Mar 21, 2026

Conversation

@The-R4V3N
Collaborator

Summary

  • Adds LLM_PROVIDER=ollama for fully local, zero-cost LLM inference via Ollama
  • Uses Ollama's OpenAI-compatible /v1/chat/completions endpoint — no SDK, just native fetch
  • No API key required — fully offline after model download
  • Configurable base URL via OLLAMA_BASE_URL (defaults to http://localhost:11434)
  • Default model: llama3.1:8b (user-configurable via LLM_MODEL)
  • 120s default timeout (longer than cloud providers for local hardware)
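Based on the summary above, the provider's request flow could look like the following minimal sketch. The function names (`buildChatRequest`, `parseChatResponse`, `chat`) are illustrative assumptions, not the PR's actual `lib/llm/ollama.mjs` code; only the endpoint, defaults, and timeout come from the description.

```javascript
// Hypothetical sketch of an Ollama provider using the OpenAI-compatible
// chat endpoint with native fetch. Names are illustrative, not the PR's code.

const OLLAMA_BASE_URL = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";
const DEFAULT_TIMEOUT_MS = 120_000; // local hardware can be slower than cloud APIs

// Build the JSON body for POST /v1/chat/completions.
function buildChatRequest(model, messages) {
  return { model, messages, stream: false };
}

// Extract the assistant text from an OpenAI-style response payload.
function parseChatResponse(payload) {
  return payload?.choices?.[0]?.message?.content ?? "";
}

async function chat(messages, { model = "llama3.1:8b", timeoutMs = DEFAULT_TIMEOUT_MS } = {}) {
  const res = await fetch(`${OLLAMA_BASE_URL}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" }, // no API key needed
    body: JSON.stringify(buildChatRequest(model, messages)),
    signal: AbortSignal.timeout(timeoutMs),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  return parseChatResponse(await res.json());
}
```

Because the endpoint is OpenAI-compatible, no Ollama-specific SDK is needed; Node 18+'s built-in `fetch` and `AbortSignal.timeout` cover transport and the long local-inference timeout.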

Changes

  • lib/llm/ollama.mjs — new provider following existing pattern
  • lib/llm/index.mjs — factory case + export
  • crucix.config.mjs — OLLAMA_BASE_URL env var passthrough
  • .env.example — updated docs with ollama option
  • test/llm-ollama.test.mjs — 11 unit tests (defaults, request format, response parsing, errors, factory)
  • test/llm-ollama-integration.test.mjs — integration test (auto-skips if Ollama unavailable)
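The "factory case" mentioned for lib/llm/index.mjs might be wired roughly as below. This is a hedged sketch: the real factory's shape is not shown on this page, so the function name and returned fields are assumptions based on the env vars and defaults listed in the summary.

```javascript
// Hypothetical sketch of the provider factory case described above;
// the real lib/llm/index.mjs is not shown in this PR, so names are illustrative.
function createProvider(name = process.env.LLM_PROVIDER) {
  switch (name) {
    case "ollama":
      // No API key needed; base URL and model fall back to the documented defaults.
      return {
        kind: "ollama",
        baseUrl: process.env.OLLAMA_BASE_URL ?? "http://localhost:11434",
        model: process.env.LLM_MODEL ?? "llama3.1:8b",
      };
    // ... existing cloud provider cases would remain here
    default:
      throw new Error(`Unknown LLM_PROVIDER: ${name}`);
  }
}
```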

Closes #30

Test plan

  • Unit tests pass: node --test test/llm-ollama.test.mjs (11/11)
  • Integration test skips gracefully when Ollama is not running
  • With Ollama running: OLLAMA_MODEL=llama3.1:8b node --test test/llm-ollama-integration.test.mjs
  • Full sweep with LLM_PROVIDER=ollama generates trade ideas

Adds LLM_PROVIDER=ollama for fully local, zero-cost inference
via Ollama's OpenAI-compatible API. No API key required.
Configurable base URL via OLLAMA_BASE_URL env var.
@calesthio
Owner

I haven't been able to get to this yet because work has been busy, but I definitely plan to review it over the weekend.

Include both Mistral and Ollama providers in factory,
config, and env docs.
calesthio previously approved these changes Mar 21, 2026
Owner

@calesthio left a comment


Approved. The Ollama provider wiring looks sound and the main regression checks passed on an isolated review worktree. One non-blocking nit: test/llm-ollama-integration.test.mjs currently only checks whether the Ollama server is reachable, so it hard-fails if the default model isn't installed locally. It would be better to skip unless the requested model actually appears in /api/tags, since the PR description says the integration test should auto-skip when the local setup isn't ready.

Check /api/tags for the requested model before running, instead of
only checking server reachability. Provides a descriptive skip reason
listing available models.
@The-R4V3N
Collaborator Author

Thanks for the review! Good catch on the integration test — I've pushed a fix (ca8f76c) that now checks /api/tags for the requested model before running, not just server reachability.
If the model isn't installed it'll skip with a descriptive message listing available models instead of hard-failing.
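The skip logic described in the fix could be sketched as follows. Ollama's real `/api/tags` endpoint returns a `{ models: [{ name, ... }] }` payload; the helper names here (`skipReason`, `shouldSkip`) are illustrative, not the actual code in commit ca8f76c.

```javascript
// Hypothetical sketch of the integration test's skip check: query /api/tags
// and only run when the requested model is installed. Names are illustrative.

const OLLAMA_BASE_URL = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";

// Given the parsed /api/tags payload, return null to run the test,
// or a descriptive skip reason listing the models that are available.
function skipReason(tagsPayload, wantedModel) {
  const available = (tagsPayload?.models ?? []).map((m) => m.name);
  if (available.includes(wantedModel)) return null;
  return `model "${wantedModel}" not installed; available: ${available.join(", ") || "(none)"}`;
}

async function shouldSkip(wantedModel) {
  try {
    const res = await fetch(`${OLLAMA_BASE_URL}/api/tags`);
    if (!res.ok) return `Ollama responded ${res.status}`;
    return skipReason(await res.json(), wantedModel);
  } catch {
    return "Ollama server not reachable"; // graceful skip instead of hard failure
  }
}
```

With Node's built-in test runner this maps naturally onto `t.skip(reason)`, so a missing server and a missing model both produce a skip with an actionable message rather than a failure.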

@calesthio self-requested a review March 21, 2026 19:46
Owner

@calesthio left a comment


LGTM

The-R4V3N merged commit a081cda into master Mar 21, 2026
1 check passed
The-R4V3N deleted the feat/ollama-provider branch March 21, 2026 20:00


Development

Successfully merging this pull request may close these issues.

[Feature] Add Ollama Provider Support for Self-Hosted Options

2 participants