
fix: handle OpenAI API errors better #1291

Open · akx wants to merge 2 commits into main from handle-ollama-errors

Conversation

akx commented Feb 19, 2025

I wanted to try Goose out, so I built it from source and tried configuring it with my local Ollama, only to be met with "Request failed with status: 404 Not Found". Knowing Ollama, I could guess that this meant the model was not found, but not everyone will.

This PR:

  • gently cleans up error handling from OpenAI compatible endpoints by introducing a real type for them
  • makes that error-handling branch handle 404 as well as 400

with the end result that the "model "qwen2.5" not found, try pulling it first" error is now surfaced.

Before

```
$ goose configure
Welcome to goose! Let's get you set up with a provider.
  you can rerun this command later to update your configuration

┌   goose-configure
│
◇  Which model provider should we use?
│  Ollama
│
◇  Provider Ollama requires OLLAMA_HOST, please enter a value
│  localhost
│
◇  Enter a model from that provider:
│  qwen2.5
│
◇  Request failed: Request failed with status: 404 Not Found
│
└  Failed to configure provider: init chat completion request with tool did not succeed.

  Warning: We did not save your config, inspect your credentials
   and run 'goose configure' again to ensure goose can connect
```

After

```
$ goose configure
Welcome to goose! Let's get you set up with a provider.
  you can rerun this command later to update your configuration

┌   goose-configure
│
◇  Which model provider should we use?
│  Ollama
│
◇  Provider Ollama requires OLLAMA_HOST, please enter a value
│  localhost
│
◇  Enter a model from that provider:
│  qwen2.5
│
◇  Request failed: model "qwen2.5" not found, try pulling it first (type: api_error) (status 404)
│
└  Failed to configure provider: init chat completion request with tool did not succeed.

  Warning: We did not save your config, inspect your credentials
   and run 'goose configure' again to ensure goose can connect
```
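
To make the description concrete, here is a minimal sketch of the approach, assuming the usual OpenAI-style error envelope (`{"error": {"message": ..., "type": ...}}`). The type names, the helper function, and its signature below are illustrative, not the PR's actual code.

```rust
use serde::Deserialize;

// Hypothetical mirror of the OpenAI-compatible error envelope, e.g.
//   {"error": {"message": "...", "type": "api_error"}}
#[derive(Deserialize)]
struct OpenAIErrorBody {
    error: OpenAIError,
}

#[derive(Deserialize)]
struct OpenAIError {
    message: String,
    #[serde(rename = "type")]
    kind: Option<String>,
}

// Sketch of the widened branch: 404 is handled alongside 400, and a
// parseable error body is surfaced instead of the bare status line. The
// "Request failed: " prefix seen in the transcripts comes from the
// configure flow, not from here.
fn describe_api_error(status: u16, body: &str) -> String {
    match status {
        400 | 404 => match serde_json::from_str::<OpenAIErrorBody>(body) {
            Ok(parsed) => format!(
                "{} (type: {}) (status {})",
                parsed.error.message,
                parsed.error.kind.as_deref().unwrap_or("unknown"),
                status
            ),
            // Body was not the expected JSON shape: keep the old message.
            Err(_) => format!("Request failed with status: {status}"),
        },
        _ => format!("Request failed with status: {status}"),
    }
}
```

Fed the body Ollama's OpenAI-compatible endpoint returns for a missing model, this yields the "After" line above: model "qwen2.5" not found, try pulling it first (type: api_error) (status 404).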

akx changed the title from "Handle (Ollama) errors better" to "fix: handle OpenAI API errors better" on Feb 19, 2025
akx force-pushed the handle-ollama-errors branch from dfa4502 to d46ced8 on February 20, 2025 10:15
akx (Author) commented Feb 20, 2025

Rebased after the #1293 merge (which touched the same things). @yingjiehe-xyz, since you authored that one, mind reviewing this (or, to begin with, at least letting CI run on it)? Thanks! :)

yingjiehe-xyz (Collaborator) left a comment

Thanks for the contribution! Some minor comments.

akx force-pushed the handle-ollama-errors branch from d46ced8 to 975f7d7 on February 20, 2025 17:30
akx requested a review from yingjiehe-xyz on February 20, 2025 17:30
akx (Author) commented Feb 20, 2025

@yingjiehe-xyz Rebased and took care of your suggestions :)

akx force-pushed the handle-ollama-errors branch from 975f7d7 to 2c05253 on February 20, 2025 17:34
akx (Author) commented Feb 20, 2025

Fixed rustfmt 🤦

yingjiehe-xyz (Collaborator) commented

We have a fix for the lint: #1314 (review)

yingjiehe-xyz (Collaborator) commented

PR #1314 (review) is merged; can you do a quick rebase?

akx force-pushed the handle-ollama-errors branch from 2c05253 to 7ede4c8 on February 20, 2025 18:23
akx (Author) commented Feb 20, 2025

@yingjiehe-xyz Done deal!

Review thread on utils.rs:

```rust
use std::io::Read;
use std::path::Path;

use crate::providers::errors::ProviderError;
use mcp_core::content::ImageContent;

#[derive(serde::Deserialize)]
struct OpenAIError {
    // … (rest of the struct elided in the review excerpt)
```
yingjiehe-xyz (Collaborator) commented:

Oh, sorry, I just realized we have an errors.rs file under the same dir. Can we move these into errors.rs instead of utils.rs?

akx (Author) replied:

Absolutely. Moved! I kept the response type where it was, because it relates only to response unwrapping and shouldn't be needed by other modules at this point :)
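
For orientation, here is a minimal sketch of the split agreed above: the deserializable error type moves next to ProviderError in errors.rs, while utils.rs keeps its response-unwrapping type private. The From impl and the RequestFailed variant are illustrative assumptions; the crate's real ProviderError definition is not shown in this thread.

```rust
use serde::Deserialize;

// errors.rs (sketch): the error type now lives next to ProviderError so
// other provider modules can reuse it.
#[derive(Debug, Deserialize)]
pub struct OpenAIError {
    pub message: String,
    #[serde(rename = "type")]
    pub kind: Option<String>,
}

#[derive(Debug)]
pub enum ProviderError {
    RequestFailed(String), // placeholder variant, not the crate's real enum
}

impl From<OpenAIError> for ProviderError {
    fn from(e: OpenAIError) -> Self {
        let kind = e.kind.unwrap_or_else(|| "unknown".to_string());
        ProviderError::RequestFailed(format!("{} (type: {})", e.message, kind))
    }
}
```

Keeping the response type private to utils.rs, as noted, avoids widening the module's public surface before another caller actually needs it.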

akx force-pushed the handle-ollama-errors branch from 7ede4c8 to e74555d on February 20, 2025 18:34