@hksk you probably connected to the wrong server. The settings in your Continue VS Code extension should be configured to use the Continue server (which starts when the extension starts), not ggml-server (llama_cpp.server). Something like this:

Continue VS Code extension -> Continue server (0.0.0.0:65432) -> ggml-server (localhost:8000)

(This assumes the Continue server and ggml-server are on the same machine; if not, ggml-server needs to bind to 0.0.0.0:8000.)
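One way to check the last hop in that chain is to query the OpenAI-compatible API that llama_cpp.server exposes. This is a minimal sketch, assuming the localhost:8000 address from the example above; swap in the remote host's address if the server runs elsewhere:

```python
import json
from urllib.request import urlopen

# Address of the ggml-server hop; replace with the remote machine's
# address if the server was started there with --host 0.0.0.0.
GGML_SERVER = "http://localhost:8000"

# llama_cpp.server exposes an OpenAI-compatible API, so /v1/models
# should list the loaded model when the server is up and reachable.
with urlopen(f"{GGML_SERVER}/v1/models") as resp:
    print(json.dumps(json.load(resp), indent=2))
```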
@longyee What you've said is correct, and @hksk sorry for not seeing this earlier. We had just changed the config file format at the time, but you can now open ~/.continue/config.py by typing the slash command '/config'. The updated instructions are all available at https://continue.dev/docs/customization
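For reference, such a config.py looked roughly like the sketch below. The import paths and class names here are assumptions (they changed between releases), so follow the docs link above for the authoritative format:

```python
# Rough sketch of ~/.continue/config.py pointing the default model at a
# llama_cpp.server instance. Import paths and class names are assumed
# from that era of Continue and may differ in your installed version;
# see https://continue.dev/docs/customization for the current format.
from continuedev.core.config import ContinueConfig
from continuedev.core.models import Models
from continuedev.libs.llm.ggml import GGML

config = ContinueConfig(
    models=Models(
        # Point at the ggml-server; use the remote host's address if the
        # server runs on another machine.
        default=GGML(server_url="http://localhost:8000"),
    ),
)
```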
Hi guys, I was just trying to run this and got a few errors:

btw, I'm running the model with:
python3 -m llama_cpp.server --model models/wizardLM-7B.ggmlv3.q4_0.bin --host 0.0.0.0
because the server is running on another computer.
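To confirm the machine running the extension can actually reach that remote server, a completion request against llama_cpp.server's OpenAI-compatible endpoint works as a sanity check. The address below is a placeholder for the other computer:

```python
import json
from urllib.request import Request, urlopen

# Placeholder address for the computer running llama_cpp.server; it
# accepts remote connections because it was started with --host 0.0.0.0.
SERVER = "http://192.168.1.50:8000"

payload = json.dumps({"prompt": "Hello", "max_tokens": 8}).encode()
req = Request(
    f"{SERVER}/v1/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["text"])
```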
Thanks for your project, it seems great!