
openrouter throws BadRequestError #171

Open
johappel opened this issue Apr 14, 2025 · 5 comments · May be fixed by #486
Labels: models (Issues about model support)

Comments

@johappel

** Please make sure you read the contribution guide and file the issues in the right place. **
Contribution guide.

Describe the bug
When I use an ADK agent with a LiteLlm model via OpenRouter, I get an error:

model=LiteLlm(model="openrouter/openai/gpt-4o-mini")

Error: litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenrouterException - Provider returned error

The same model accessed directly via OpenAI does not throw any errors:
model=LiteLlm(model="openai/gpt-4o-mini")

To Reproduce
Steps to reproduce the behavior:

  1. Install the package: pip install google-adk
  2. Run the following agent:
import datetime
from zoneinfo import ZoneInfo
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city.

    Args:
        city (str): The name of the city for which to retrieve the weather report.

    Returns:
        dict: status and result or error msg.
    """
    if city.lower() == "new york":
        return {
            "status": "success",
            "report": (
                "The weather in New York is sunny with a temperature of 25 degrees"
                " Celsius (41 degrees Fahrenheit)."
            ),
        }
    else:
        return {
            "status": "error",
            "error_message": f"Weather information for '{city}' is not available.",
        }


def get_current_time(city: str) -> dict:
    """Returns the current time in a specified city.

    Args:
        city (str): The name of the city for which to retrieve the current time.

    Returns:
        dict: status and result or error msg.
    """

    if city.lower() == "new york":
        tz_identifier = "America/New_York"
    else:
        return {
            "status": "error",
            "error_message": (
                f"Sorry, I don't have timezone information for {city}."
            ),
        }

    tz = ZoneInfo(tz_identifier)
    now = datetime.datetime.now(tz)
    report = (
        f'The current time in {city} is {now.strftime("%Y-%m-%d %H:%M:%S %Z%z")}'
    )
    return {"status": "success", "report": report}


root_agent = Agent(
    name="weather_time_agent",
    model=LiteLlm(model="openrouter/openai/gpt-4o-mini"),
    description=(
        "Agent to answer questions about the time and weather in a city.  If a city"
        " is not provided, ask for clarification."
    ),
    instruction=(
        "You are a helpful agent. Your task is to answer user questions about"
        " time or weather using the provided tools. "
        "1. Identify the required tool (get_weather or get_current_time) and the city."
        "2. Call the identified tool **exactly once**."
        "3. After receiving the response from the tool, **immediately** formulate a final answer for the user based **only** on the tool's response."
        "4. **Do not call any tool more than once per user request.**"
        "5. If the user input is unclear or a tool returns an error, inform the user."
    ),
    tools=[get_weather, get_current_time],
)
  3. Open 'adk web' (see the API key note below)
  4. See the error when asking for the "weather in New York"
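Note on setup: the OpenRouter path needs credentials before step 3. A minimal sketch, assuming LiteLLM reads the key from the OPENROUTER_API_KEY environment variable (the value shown is a placeholder, not a real key):

import os

# Assumption: LiteLLM picks up the OpenRouter credential from this variable.
os.environ["OPENROUTER_API_KEY"] = "sk-or-..."  # placeholder key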

Expected behavior
The behavior of model=LiteLlm(model="openai/gpt-4o-mini") and model=LiteLlm(model="openrouter/openai/gpt-4o-mini") should be identical.

Desktop (please complete the following information):

  • OS: Windows
  • Python version (python -V): 3.13.2
  • ADK version (pip show google-adk): 0.1.0
@hangfei
Collaborator

hangfei commented Apr 15, 2025

@selcukgun to take a look if this can be supported.

@namish800

namish800 commented Apr 16, 2025

Hey @hangfei @selcukgun, I am also facing the same issue. I would love to contribute. Please let me know where I can start.

@hangfei
Collaborator

hangfei commented Apr 24, 2025

Thanks. @namish800, could you start by researching the issue and then suggest a fix?

@PaveLuchkov

PaveLuchkov commented Apr 30, 2025

I have the same issue. I can add that when using only one tool, OpenRouter can handle it, but it struggles with multiple tools.
UPD: No, the problem is not the number of tools.
I've tried tools from calendar_tool_set: calendar_events_list and calendar_events_insert. For some reason, OpenRouter works only with calendar_events_list.

@PaveLuchkov

I'm adding my findings to this issue as I've encountered the exact same problem when using the calendar_events_insert tool from the Google Calendar toolset (google.calendar_tools.calendar_tool_set) with Google ADK.

My Setup:
LLM Provider: OpenRouter
Model: google/gemini-2.0-flash-001
Tool Failing: calendar_events_insert
Tool Working: calendar_events_list (for comparison)

After enabling debug logging in LiteLLM (litellm._turn_on_debug()), I captured the request sent to OpenRouter and the subsequent error response originating from the Google AI backend (Gemini).
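For reference, the debug output was captured roughly like this (a minimal sketch; litellm._turn_on_debug() is the call mentioned above, placed before running the agent):

import litellm

# Print LiteLLM's verbose debug output, including the outgoing request payload
# with the generated tools/function_declarations schema.
litellm._turn_on_debug()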

The request payload clearly shows the tools definition for calendar_events_insert. However, the error returned by Google indicates an INVALID_ARGUMENT (code 400) due to issues within the function declaration's parameter schema:

"message": "* GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[reminders].properties[overrides].items.properties: only allowed for OBJECT type\n* ... (many similar lines related to '.properties') ...\n* GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[conference_data].properties[entryPoints].items.properties[entryPointFeatures].items: field predicate failed: $type == Type.ARRAY\n"
Upon inspecting the generated schema within the request's tools array (as shown in the LiteLLM debug logs), it's evident that data types are being represented using Python Enum representations (e.g., <Type.OBJECT: 'OBJECT'>, <Type.STRING: 'STRING'>, <Type.ARRAY: 'ARRAY'>) instead of the standard JSON Schema string literals (e.g., "object", "string", "array").

This appears to be an issue within ADK's schema generation/serialization logic, specifically for function/tool definitions sent to models through LiteLLM.
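To make the symptom concrete, here is a minimal illustration; the Type enum below is a hypothetical stand-in for the real type enum ADK uses, not the actual class:

from enum import Enum

class Type(Enum):  # hypothetical stand-in for the ADK/Gemini type enum
    OBJECT = "OBJECT"
    ARRAY = "ARRAY"
    STRING = "STRING"

# If the enum member itself lands in the serialized schema, its repr leaks into the request:
print(repr(Type.OBJECT))          # <Type.OBJECT: 'OBJECT'>  (what the debug log shows)
# What the backend expects is the plain JSON Schema literal:
print(Type.OBJECT.value.lower())  # object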

PaveLuchkov added a commit to PaveLuchkov/adk-python that referenced this issue May 1, 2025
Fixes conversion of Gemini Type enums within the recursive _schema_to_dict
function to ensure valid JSON Schema types ('string', 'object', etc.)
are generated, resolving errors with complex tools like Google Calendar.

Fixes google#171
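For anyone following along, a hedged sketch of the kind of conversion that commit describes: the real change is in ADK's _schema_to_dict, and this standalone helper only mirrors the idea of recursively turning enum or uppercase "type" values into lowercase JSON Schema strings.

from enum import Enum

def normalize_schema_types(schema):
    """Recursively replace enum/uppercase 'type' values with lowercase JSON Schema strings."""
    if isinstance(schema, dict):
        result = {}
        for key, value in schema.items():
            if key == "type" and isinstance(value, Enum):
                result[key] = value.value.lower()   # Type.OBJECT -> "object"
            elif key == "type" and isinstance(value, str):
                result[key] = value.lower()         # "OBJECT" -> "object"
            else:
                result[key] = normalize_schema_types(value)
        return result
    if isinstance(schema, list):
        return [normalize_schema_types(item) for item in schema]
    return schema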
@boyangsvl boyangsvl added the models Issues about model support label May 2, 2025

6 participants