[BUG] Connect local LLM - Ollama gemma:2b #290

@jakhangir-esanov

Description

What would you like to share?

How can I connect a local LLM? When I try to set a local model (Ollama gemma:2b), the error "ollama is not supported" comes up. For security reasons we cannot work with ChatGPT or other cloud-hosted LLMs, so we need to work with a local LLM.
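
For reference, Ollama exposes an OpenAI-compatible endpoint locally, so any client that accepts a custom base URL can reach it. Below is a minimal sketch of what we are trying to achieve, assuming `ollama serve` is running and the model has been pulled with `ollama pull gemma:2b`; the base URL and placeholder API key follow Ollama's documented defaults:

```python
# Minimal sketch: query a locally running Ollama server through its
# OpenAI-compatible endpoint. Assumes `ollama serve` is running and
# the model has been pulled with `ollama pull gemma:2b`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's local OpenAI-compatible API
    api_key="ollama",  # required by the client library, ignored by Ollama
)

response = client.chat.completions.create(
    model="gemma:2b",
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```

If the tool allowed overriding the base URL like this instead of rejecting the provider name, local models would work without any cloud dependency.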

Additional information

No response

Metadata

Labels

LLM: Large Language Model
enhancement: New feature or request
good first issue: Good for newcomers
hacktoberfest: Participation in the Hacktoberfest event
help wanted: Extra attention is needed
✨ feature: New feature requests or implementations
🐛 bug: Issues related to bugs or errors
📝 documentation: Tasks related to writing or updating documentation
📦 dependencies: Dependencies
🕓 medium effort: A task that can be completed in a few hours
🚀 performance: Performance optimizations or regressions
🚨 security: Security-related issues or improvements
🧠 backlog: Items that are in the backlog for future work
🧪 tests: Tasks related to testing
