toolCallStreaming is not functioning as expected with Bedrock or Google models. #5544
Comments
Same issue here: Gemini tool calls always return in a single batch, and the request can time out and simply abort if the tool call takes too long.
Hey, I am wondering whether you have managed to figure out tool streaming? Thanks in advance!
Hi @DayuanJiang (your app looks awesome, btw!) and @techjason! Looks like for Gemini, this is happening at the provider level; they send back a single chunk where the tool args have been completely resolved. I'm digging through Gemini docs + discussions to see if this is intended behavior. For now: if it's crucial for your app to display the tool args streaming in, I would recommend sticking to
I have yet to find explicit documentation that the args are always sent in a single chunk. It seems that, when streaming, the structured JSON object is often returned as a single chunk; see this diagram + description here:
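To illustrate the difference being discussed: when tool-call streaming works, the provider emits the tool arguments as a series of incremental text deltas that get concatenated into the final JSON, whereas Gemini appears to send one chunk with fully resolved args. A minimal self-contained sketch of that accumulation (the `StreamPart` shape here is a simplified stand-in for illustration, not the SDK's actual type):

```typescript
// Simplified stand-in for streamed parts (illustrative shape, not the SDK's real type).
type StreamPart =
  | { type: "tool-call-delta"; toolCallId: string; argsTextDelta: string }
  | { type: "tool-call"; toolCallId: string; args: string };

// Accumulate partial argument text per tool call, as a streaming UI would.
function accumulateArgs(parts: StreamPart[]): Map<string, string> {
  const argsById = new Map<string, string>();
  for (const part of parts) {
    if (part.type === "tool-call-delta") {
      // Incremental path: args arrive piece by piece, so the UI can render
      // intermediate states as they stream in.
      argsById.set(
        part.toolCallId,
        (argsById.get(part.toolCallId) ?? "") + part.argsTextDelta
      );
    } else {
      // Single-chunk path: args arrive fully resolved in one part, so the UI
      // only ever sees the final state (the behavior reported for Gemini).
      argsById.set(part.toolCallId, part.args);
    }
  }
  return argsById;
}

// Incremental deltas produce several renderable intermediate states...
const incremental: StreamPart[] = [
  { type: "tool-call-delta", toolCallId: "a", argsTextDelta: '{"city":' },
  { type: "tool-call-delta", toolCallId: "a", argsTextDelta: '"Tokyo"}' },
];
// ...while a single resolved chunk yields only the final state.
const single: StreamPart[] = [
  { type: "tool-call", toolCallId: "b", args: '{"city":"Tokyo"}' },
];

console.log(accumulateArgs(incremental).get("a")); // {"city":"Tokyo"}
console.log(accumulateArgs(single).get("b")); // {"city":"Tokyo"}
```

Either way the final args are identical; the difference is only whether any intermediate `tool-call-delta` states exist for the UI to display.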
But as for the following:
@techjason can you further contextualize your issue? A code snippet would help us better understand what's happening (e.g. are you executing the tool client-side, or are you passing an
@iteratetograceness Thank you for your investigation. I previously suspected the issue stemmed from the provider's side. Additionally, I think it's best to correct the documentation, as it currently indicates that all Google models support toolCallStreaming: https://sdk.vercel.ai/providers/ai-sdk-providers/google-generative-ai#model-capabilities
Description

While `toolCallStreaming` works correctly with `openai("gpt-4o")`, when I switch to `google("gemini-2.0-flash-001")` or `bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0')`, `toolInvocation.args` does not stream; it appears to be output only after all tokens have been gathered. However, regular messages stream without issue.
You can try at the demo site: https://next-ai-draw-io.vercel.app/
And here is the repo: https://github.com/DayuanJiang/next-ai-draw-io
Here is the code from my route.ts:

And here is the client code that renders the tool info.
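(The client code above was not captured in this thread. For context, a minimal self-contained sketch of what rendering per-state tool invocations looks like; the `ToolInvocation` type below is a simplified illustration of the states the AI SDK exposes, not the repo's actual code.)

```typescript
// Simplified illustration of the tool-invocation states a chat UI receives.
type ToolInvocation =
  | { state: "partial-call"; toolName: string; args: unknown }
  | { state: "call"; toolName: string; args: unknown }
  | { state: "result"; toolName: string; args: unknown; result: unknown };

// Produce a display string per invocation. With toolCallStreaming enabled,
// "partial-call" states should appear repeatedly as args stream in; the
// reported Gemini/Bedrock behavior is jumping straight to "call".
function renderToolInvocation(inv: ToolInvocation): string {
  switch (inv.state) {
    case "partial-call":
      return `${inv.toolName} (streaming args...): ${JSON.stringify(inv.args)}`;
    case "call":
      return `${inv.toolName} (args complete): ${JSON.stringify(inv.args)}`;
    case "result":
      return `${inv.toolName} -> ${JSON.stringify(inv.result)}`;
  }
}

console.log(
  renderToolInvocation({ state: "partial-call", toolName: "draw", args: { shape: "box" } })
);
```

If `toolCallStreaming` is working, this kind of renderer gets called many times in the `partial-call` state with progressively longer args; when the provider sends a single resolved chunk, it only ever sees `call` and `result`.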
Code example
No response
AI provider
No response
Additional context
No response