
Gracefully present AI service failures #116

@felipemontoya

Description


Every workflow (or almost every one) will depend on an LLM service call. Sometimes this call will fail due to unforeseen circumstances, for example:

  • the key is not valid
  • the key has reached a token limit
  • the LLM service is down
  • the context window was too large and the LLM inference broke

When something like that happens, we want to make sure that we handle it.

  • in the backend, properly log the error with the message returned from the LLM service
  • in the frontend, properly explain to the user what the issue was, without leaking any info the LLM message might contain (best to have a few prepared messages and show one of those; see the sketch below)
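
A minimal sketch of what this could look like on the backend, assuming an OpenAI-style client. Names like `call_llm`, `classify_error`, and `PREPARED_MESSAGES` are hypothetical and not part of this repo; the point is only to show logging the raw provider error while exposing a prepared message to the frontend.

```python
import logging

logger = logging.getLogger(__name__)

# Prepared, user-facing messages that never echo the provider's error text.
PREPARED_MESSAGES = {
    "auth": "The AI service rejected our credentials. Please contact an administrator.",
    "quota": "The AI service quota has been exhausted. Please try again later.",
    "unavailable": "The AI service is temporarily unavailable. Please try again later.",
    "context": "The request was too large for the AI service. Try a shorter prompt.",
    "unknown": "The AI service could not complete the request. Please try again later.",
}


def classify_error(exc: Exception) -> str:
    """Map a provider exception to a prepared-message key (heuristic, by message text)."""
    text = str(exc).lower()
    if "api key" in text or "unauthorized" in text:
        return "auth"
    if "rate limit" in text or "quota" in text:
        return "quota"
    if "context" in text or "maximum context length" in text:
        return "context"
    if "unavailable" in text or "timeout" in text:
        return "unavailable"
    return "unknown"


def safe_completion(call_llm, *args, **kwargs):
    """Run the LLM call; log the raw provider error, return only a prepared message."""
    try:
        return {"ok": True, "result": call_llm(*args, **kwargs)}
    except Exception as exc:  # provider SDKs raise different exception types
        # Backend: keep the full provider message in the logs for debugging.
        logger.exception("LLM service call failed: %s", exc)
        # Frontend: expose only a prepared message, never the raw error text.
        return {"ok": False, "error": PREPARED_MESSAGES[classify_error(exc)]}
```

The frontend would then only ever render `error` from this response, so nothing the LLM provider put in its error message can reach the user.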
