I am a hobbyist and was wondering whether it would be possible to use this in a self-hosted, offline setup.
Or, at least, to substitute the OpenAI API with a self-hosted LLM such as Llama.
If this question has been asked before, I'm terribly sorry; I missed it when I was searching.
I've been testing the V1 early access for an idea I've been toying with. After spending $50 on Pythagora V1, I got roughly to the point where someone had quoted me $8.5k, so I can't complain about the cost. I'd be happy to contribute to help advance the project, but please allow me to use local models.
Just from reading the config file, you can set an OpenAI base URL. If you serve your model locally behind an OpenAI-compatible API, you can point that base URL at your local endpoint and it should work.
I'm just guessing, though; I'm not an expert and haven't used gpt-pilot before.
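To make the idea above concrete: I don't know gpt-pilot's exact config keys, but if it goes through the standard OpenAI SDK, the SDK honors the `OPENAI_BASE_URL` and `OPENAI_API_KEY` environment variables. A minimal sketch, assuming a local OpenAI-compatible server such as Ollama on its default port (both values are assumptions, not gpt-pilot documentation):

```shell
# Point the OpenAI SDK at a local OpenAI-compatible server
# (hypothetical endpoint: Ollama's default port, 11434).
export OPENAI_BASE_URL="http://localhost:11434/v1"

# Local servers typically ignore the key, but the SDK requires one to be set.
export OPENAI_API_KEY="sk-local-placeholder"
```

If gpt-pilot reads its base URL from its own config file instead, the same URL would go there; the point is just that any endpoint speaking the OpenAI wire format should be substitutable.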
Version
VisualStudio Code extension