Can openllm support a local path model? #1044
openllm deployment
4. Create a venv virtual environment
5. Activate the venv virtual environment
6. Install the dependencies
Update src/bentofile.yaml as follows
Update bento_constants.py as follows: CONSTANT_YAML = ''' '''
Update bento.yaml as follows: service: service:VLLM
9. Activate the venv virtual environment and run: $ export BENTOML_HOME=/home/tcx/.openllm/repos/github.com/bentoml/openllm-models/main/bentoml
or
10. If the port is occupied, run the following command (see the shell sketch after these steps)
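A minimal shell sketch of steps 4–6, 9 and 10, assuming a Python 3 environment, a `requirements.txt` holding the dependencies, and BentoML's default serving port 3000; the requirements file name, the port, and the process-kill step are assumptions for illustration, not taken from the original steps.

```bash
# Steps 4-6: create and activate a venv, then install dependencies
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt   # assuming dependencies are listed in requirements.txt

# Step 9: point BentoML at the cloned openllm-models repo before serving
export BENTOML_HOME=/home/tcx/.openllm/repos/github.com/bentoml/openllm-models/main/bentoml

# Step 10: if the serving port is already taken (BentoML defaults to 3000),
# find the process holding it and stop it, or serve on a different port instead
lsof -i :3000
kill <PID>   # replace <PID> with the process id reported by lsof
```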
It seems that you have a step-by-step solution. Is there anything we can help with?
I still do not know how to load a LoRA fine-tuned model or where to modify the YAML file.
I don't think we have LoRA loading supported yet, but we can add this @bojiang
As for local path models, I think we can support that.
thanks 🌺
In openllm-models' service.py, vllm_api_server.openai_serving_chat = OpenAIServingChat(...) and the corresponding completion server are both constructed with lora_modules=None.
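For reference, a hedged sketch of how the upstream vLLM OpenAI-compatible server can be launched with a local LoRA adapter from the command line; the base model name, adapter name, and adapter path below are hypothetical placeholders, and this shows the vLLM entrypoint directly rather than the openllm-models service, where lora_modules is currently left as None.

```bash
# Sketch: start vLLM's OpenAI-compatible server with LoRA enabled.
# --enable-lora turns on LoRA support; --lora-modules registers name=path pairs.
python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Llama-2-7b-hf \
  --enable-lora \
  --lora-modules my-lora=/path/to/local/lora_adapter

# The adapter can then be requested by name through the OpenAI-style API:
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-lora", "prompt": "Hello", "max_tokens": 16}'
```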
🌼
Hi there, I think this involves a larger design discussion that we are currently working on internally. We will update once we have more details. Thanks for your patience.
Got it!
|
How can I use OpenLLM with a local LoRA model?