Yeah, I need to look into this; it's very much needed. I think a way to do this is to have some way to specify custom providers in the config. Currently opencode also uses the model config to know things like the cost per token, but I suppose we could allow users to configure that as well. Thanks for the codecompanion.nvim example.
The company I work for provides access to a range of models — hosted on Azure OpenAI, AWS Bedrock, etc. — through a "proxy" API.
The main purpose of this proxy is to handle authentication in a uniform way: we get a single API key, which we are expected to supply in an `api-key: <key>` header, and they handle the rest.
They also provide what they call a "universal" API: a single API that adheres to the OpenAI API spec, through which models from all providers can be used (so both the OpenAI and the Anthropic Claude models, etc.).
I suspect I'm not the only one with unusual requirements for how and where their models are hosted. Perhaps a third-party / custom provider abstraction would be worth looking into?
A look at CodeCompanion.nvim for inspiration
For inspiration, perhaps take a look at CodeCompanion.nvim. Their adapter abstraction lets you customize an existing adapter through configuration, or create a new adapter altogether. I don't think their adapter abstraction is perfect, but it could be a useful source of inspiration.
Of course this isn't a completely fair comparison: since CodeCompanion.nvim is a Neovim plugin, it enjoys the luxury and freedom of Lua-based configuration. But still.
I was able to extend their "OpenAI compatible" adapter with some basic connection info, a list of the models we support, and a few options to get things fully working. (See the code.)
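To give a rough idea of what this looks like, here is a sketch along those lines. The adapter keys are written from memory and may not match CodeCompanion.nvim's current API exactly, and the proxy URL and model names are placeholders, not our real ones:

```lua
-- Sketch: extending CodeCompanion.nvim's OpenAI-compatible adapter so all
-- requests go through the company proxy. Key names are illustrative; check
-- the CodeCompanion adapter docs for the exact schema.
require("codecompanion").setup({
  adapters = {
    company_proxy = function()
      return require("codecompanion.adapters").extend("openai_compatible", {
        env = {
          url = "https://proxy.example.com",  -- placeholder proxy base URL
          api_key = "COMPANY_PROXY_API_KEY",  -- env var holding the key
        },
        headers = {
          -- the proxy expects the key in an `api-key` header rather than
          -- the usual `Authorization: Bearer <key>`
          ["api-key"] = "${api_key}",
        },
        schema = {
          model = {
            -- placeholder list of models exposed by the universal API
            default = "gpt-4o",
            choices = { "gpt-4o", "claude-3-5-sonnet" },
          },
        },
      })
    end,
  },
})
```

The point is less the specific keys than the shape: connection info, an auth header override, and a model list are enough to describe a provider like ours.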