How to use the custom model function of v2.13.0 (how to use 2.13.0 multi models) #5001
Comments
Please follow the issue template to update the title and description of your issue.
Title: How to use the custom model function of v2.13.0 (how to use 2.13.0 multi models)
Also, I would like to ask how to support multiple custom deployments of the same type, such as Azure #4398
With a docker compose deployment, configuring +gpt-3.5-turbo@azure=gpt-3.5 as described does not take effect. No OpenAI parameters were configured, but requests still default to the OpenAI channel, which returns an error.
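When routing a model to Azure, the Azure endpoint variables also need to be set, otherwise requests fall back to the default OpenAI channel. A minimal .env sketch, assuming the AZURE_URL / AZURE_API_KEY / AZURE_API_VERSION variables described in the project README (resource name, key, and API version below are placeholders):

```shell
# Route gpt-3.5-turbo to Azure under the display name "gpt-3.5"
CUSTOM_MODELS="+gpt-3.5-turbo@azure=gpt-3.5"
# Azure endpoint settings; without these, requests fall back to OpenAI
AZURE_URL="https://my-resource.openai.azure.com"   # placeholder resource URL
AZURE_API_KEY="your-azure-key"                     # placeholder key
AZURE_API_VERSION="2023-05-15"                     # placeholder API version
```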
Which variable should be configured in env for the custom model's URL?
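For an OpenAI-compatible custom endpoint, the base address is usually taken from BASE_URL. A hedged .env sketch (the URL and key are placeholders; that BASE_URL is the right variable for your backend is an assumption based on the project README):

```shell
# Point the OpenAI-compatible channel at a custom gateway (placeholder URL)
BASE_URL="https://my-proxy.example.com"
OPENAI_API_KEY="sk-placeholder"
# Then expose the custom model through CUSTOM_MODELS
CUSTOM_MODELS="+my-custom-model"
```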
Error during Q&A: Unknown parameter: 'path'. Request parameters sent to one-api:
Thank you, that solved the problem with the third-party API.
A quick question: what is the Artifacts plugin for? I searched Google and Baidu but still don't understand it...
It's a plugin for Claude models; see the official documentation for details.
When I use this project to call my own fine-tuned large-model API, the output is limited to 100 tokens. How should I change this?
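A 100-token ceiling usually comes from a max_tokens cap, either a default in the serving stack or one sent with the request. A minimal Python sketch of setting an explicit cap in an OpenAI-compatible payload (the model name is a placeholder, and whether this is where your limit comes from is an assumption):

```python
def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-compatible chat payload with an explicit token cap."""
    return {
        "model": "my-finetuned-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # raise this above a low default such as 100
    }

payload = build_chat_request("Hello", max_tokens=2048)
print(payload["max_tokens"])
```

If the cap is enforced server-side instead, the serving framework's own generation settings are the place to change it.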
@lloydzhou On v2.15.5 I tried +claude-3-5-sonnet-20240620@OpenAI,+claude-3-haiku-20240307@OpenAI, but the models do not appear under the OpenAI provider.
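The entries above follow the CUSTOM_MODELS grammar: comma-separated items, where "+" adds a model, "-" removes one, "@provider" pins a provider, and "=alias" sets a display name. An illustrative Python parser for that grammar, to show how the pieces decompose (not the project's actual implementation):

```python
def parse_custom_models(spec: str) -> list[dict]:
    """Parse a CUSTOM_MODELS-style spec into a list of entry dicts."""
    entries = []
    for raw in spec.split(","):
        raw = raw.strip()
        if not raw:
            continue
        # Leading "-" removes a model; anything else (incl. "+") adds one
        op = "remove" if raw.startswith("-") else "add"
        body = raw.lstrip("+-")
        # "=alias" sets a display name, "@provider" pins a provider
        name_provider, _, alias = body.partition("=")
        name, _, provider = name_provider.partition("@")
        entries.append({
            "op": op,
            "name": name,
            "provider": provider or None,
            "alias": alias or None,
        })
    return entries

print(parse_custom_models("+claude-3-haiku-20240307@OpenAI,-gpt-3.5-turbo"))
```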
Configure this with CUSTOM_MODELS.
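A hedged .env sketch of CUSTOM_MODELS usage, following the syntax discussed in this thread (the model names are examples; "-all" hiding the built-in list is per the project README):

```shell
# Hide the built-in model list, then add specific entries:
# "+name@provider=alias" adds a model pinned to a provider with a display name
CUSTOM_MODELS="-all,+gpt-4o@openai,+gpt-3.5-turbo@azure=gpt-3.5"
```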