
Conversation


@teto (Contributor) commented Jan 26, 2026

While experimenting with AI, my environment gets messy fast, and it's not always easy to know which model my software is trying to load. This helps with troubleshooting.

Before:

> Who are you?


Error: {
  code = 400,
  message = "model not found",
  type = "invalid_request_error"
}

After:

> Who are you?


Error: {
  code = 400,
  message = "model 'toto' is not found",
  type = "invalid_request_error"
}

NB: I couldn't find a target to run linting (usually make lint or make format), so I tried to run the CI locally as explained in contributing.md with:

GG_BUILD_CUDA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
Warning: Using fallback CUDA architectures
./ci/run.sh: line 651: python3: command not found

I am developing from within the repo's flake.nix, so I expected all the devtools to be present (I noticed the flake doesn't provide ccache either; not sure whether that's a conscious decision). I can submit a patch to add python3.


@isaac-mcfadyen (Contributor):

Minor grammar nit: "is" should probably be "was" here, i.e. "model 'test' was not found" 🙂


@teto (Contributor, Author) commented Jan 27, 2026

I might have copied it from

void server_models::load(const std::string & name) {
    if (!has_model(name)) {
        throw std::runtime_error("model name=" + name + " is not found");
    }
    // ...
}

Shall I correct that one too?

@isaac-mcfadyen (Contributor):

Oh interesting, I didn't realize that 😅

I'm not actually a maintainer of the server so I can't give final say, but we might as well? I can't imagine there's a reason not to unless someone is depending on the error messages being the same.

Experimenting with AI, my environment gets messy fast and it's not
always easy to know what model my software is trying to load. This helps
with troubleshooting.

Before:

Error: {
  code = 400,
  message = "model not found",
  type = "invalid_request_error"
}

After:

Error: {
  code = 400,
  message = "model 'toto' not found",
  type = "invalid_request_error"
}
@teto teto force-pushed the teto/print-model-name-when-not-found branch from 4e5b13a to 752c3a2 on January 27, 2026 15:10

@teto (Contributor, Author) commented Jan 27, 2026

Actually, there are many messages of the form "model=<MODEL> is not found", so I prefer to keep the current format and just insert the model name in between.
