Open
Description
This is such an amazing package, but for some of the larger codebases I work with, the cost of running it against cloud models would simply be too high. How difficult would it be to support locally running LLMs through the likes of LM Studio or Ollama? Both provide OpenAI-compatible APIs, along with a suite of other ways to interact with a locally running model, so in theory only the endpoint and model name would need to change (see the sketch below). This feature would be killer and would set this apart as a tool similar to Claude Code for local codebase analysis.
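For reference, here is a rough sketch of what I mean: if the package already talks to an OpenAI-style API, a configurable base URL might be enough to redirect requests to a local server. The endpoint URLs below are the defaults for Ollama and LM Studio, and the model name is just an example from my own setup, not something the package defines.

```python
# Minimal sketch (not part of this package) showing how an OpenAI-compatible
# client can be pointed at a local server instead of the cloud API.
# Assumes Ollama is running locally with a model already pulled; the model
# name and endpoint need to match your local setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",   # Ollama's OpenAI-compatible endpoint
    # base_url="http://localhost:1234/v1",  # LM Studio's default local server
    api_key="ollama",  # local servers ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3",  # example only: whatever model is loaded locally
    messages=[
        {"role": "user", "content": "Summarize what this repository does."},
    ],
)
print(response.choices[0].message.content)
```

In other words, exposing the base URL and model name as configuration options might be all that is needed on the package side.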