Ollama integration #346

Open · wants to merge 7 commits into main

Conversation

alexthotse
No description provided.

@alexthotse (Author)

Ollama integration

This commit introduces a dedicated Ollama provider to the application. Previously, local models were handled by a generic `local` provider. This change adds a new `ollama` provider, making the integration more explicit and easier to maintain.

The following changes are included:

- A new `ollama.go` file in `internal/llm/provider` to define the `OllamaClient`.
- A new `ollama.go` file in `internal/llm/models` to define Ollama models.
- Updates to `provider.go` and `models.go` to register the new provider and its models.
- A new test file `internal/llm/provider/ollama_test.go` to verify the integration.

A later commit introduces several broader features and improvements:

- Reordered the provider priority so that Ollama, OpenRouter, and Gemini are tried first.
- Added a new `config` command to the CLI that allows you to set your API keys and other settings.
- Added support for Hugging Face, Replicate, and Cohere as new LLM providers.
- Added a testing framework for the providers, including a mock provider.
- Improved the error handling in the application by creating a new `errors.go` file with custom error types.