Models not loading #6

@cmdull


(.venv) PS C:\Users\user\VS Code Projects\test_AI> streamlit run app.py

Local URL: http://localhost:8501
Network URL: http://x.x.x.x:8501

llama_model_load: loading model from 'C:/Users/user/AppData/Local/nomic.ai/GPT4All/ggml-model-gpt4all-falcon-q4_0.bin' - please wait ...
llama_model_load: invalid model file 'C:/Users/user/AppData/Local/nomic.ai/GPT4All/ggml-model-gpt4all-falcon-q4_0.bin' (unsupported format version 3, expected 1)
llama_init_from_file: failed to load model

This (or something similar) is happening with every model I've downloaded through the GPT4All GUI.
My environment:

  • Windows 10
  • Python 3.10.10 (in VS Code)

After some searching, I found mentions of a script for converting the models (convert-unversioned-ggml-to-ggml.py), but I haven't been able to get it to work. Are there any other routes I could take to move forward?
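In case it helps with diagnosis, here is a quick sketch I put together (my own, not part of GPT4All) to peek at a model file's header. It assumes the usual GGML container layout: a little-endian 4-byte magic, followed by a 4-byte version for the versioned containers. The "unsupported format version 3, expected 1" error suggests the file is a newer container than the loader understands.

```python
import struct

# Known GGML container magics (little-endian uint32); the names are
# just the ASCII bytes of each magic value.
MAGICS = {
    0x67676D6C: "ggml",  # unversioned container (no version field)
    0x67676D66: "ggmf",  # versioned container
    0x67676A74: "ggjt",  # versioned, mmap-friendly container
}

def inspect_ggml_header(path):
    """Return (container, version) from a GGML model file header.

    version is None for the unversioned 'ggml' container or for
    files whose magic is not recognized.
    """
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        container = MAGICS.get(magic)
        if container is None or container == "ggml":
            return (container or "unknown", None)
        (version,) = struct.unpack("<I", f.read(4))
        return (container, version)
```

Running this on the falcon model reports a versioned container with version 3, which matches the error message, so it looks like a container-version mismatch rather than a corrupt download.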
