
Chunk level timeout on streamText #5443

Open
NeroBlackstone opened this issue Mar 29, 2025 · 4 comments
Labels: ai/core, enhancement (New feature or request)

Comments

@NeroBlackstone

Feature Description

Although streamText provides an abortSignal option, it applies to the entire HTTP request. Could we implement a chunk-level timeout, which is more practical in many cases?

I guess this can be achieved with onChunk, but it would be better if the AI SDK had this feature built in.
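
For illustration only, a built-in option might look something like the sketch below; chunkTimeoutMs is a hypothetical name, not an existing streamText option:

import { streamText } from 'ai';

// Hypothetical API sketch: chunkTimeoutMs is NOT a real streamText
// option; it only illustrates the requested built-in behavior.
const result = streamText({
  model,                  // your LanguageModel instance (placeholder)
  prompt,                 // your prompt (placeholder)
  chunkTimeoutMs: 10_000, // abort if no chunk arrives within 10 s
});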

Use Cases

No response

Additional context

No response

NeroBlackstone added the enhancement label on Mar 29, 2025
lgrammel changed the title from "Chuck level timeout on streamText" to "Chunk level timeout on streamText" on Mar 29, 2025
@lgrammel
Collaborator

Can you describe how the chunk timeout should behave? When should it reset, when should it time out?

@NeroBlackstone
Author

> Can you describe how the chunk timeout should behave? When should it reset, when should it time out?

Thank you for the reply.

As we know, the Server-Sent Events of an OpenAI-compatible API return data as a stream of chunks:

data: {"id":"20250329162729bae32349198643da","choices":[{"index":0,"delta":{"content":"]}"}}],"created":1743236852,"model":"glm-zero-preview","object":"chat.completion.chunk"}

data: {"id":"20250329162729bae32349198643da","choices":[{"index":0,"delta":{"content":"","role":"assistant"}}],"created":1743236852,"model":"glm-zero-preview","object":"chat.completion.chunk"}

data: {"id":"20250329162729bae32349198643da","choices":[{"finish_reason":"stop","index":0,"delta":{}}],"created":1743236852,"model":"glm-zero-preview","object":"chat.completion.chunk"}

..................many data chunks......................

data: [DONE]

I would like to terminate the request when the wait between chunks exceeds a certain threshold.

> When should it reset?

Maybe we can set a timer for the stream: whenever a chunk is received, reset the timer; when the timer fires, terminate the request.

@NeroBlackstone
Author

My solution:

import { streamText } from 'ai';

// Abort the request when no chunk arrives within this window.
// (Not defined in the original snippet; 10 s is just an example.)
const CHUNK_TIMEOUT_IN_MS = 10_000;

const chunkTimeoutAbortController = new AbortController();
let inactivityTimerId: NodeJS.Timeout | null = null;

// (Re)start the inactivity timer. If it fires before the next chunk
// arrives, abort the underlying HTTP request.
const resetInactivityTimer = () => {
  if (inactivityTimerId) {
    clearTimeout(inactivityTimerId);
  }
  inactivityTimerId = setTimeout(() => {
    chunkTimeoutAbortController.abort();
  }, CHUNK_TIMEOUT_IN_MS);
};

const result = streamText({
  // model, prompt, and other options as in your app
  abortSignal: chunkTimeoutAbortController.signal,
  onChunk: () => {
    // A chunk arrived in time: restart the countdown.
    resetInactivityTimer();
  },
  onError: ({ error }) => {
    // Stop the timer and surface the error (the original wrapped
    // this in a custom LogFormat helper).
    if (inactivityTimerId) {
      clearTimeout(inactivityTimerId);
    }
    console.error('streamText error: ' + error);
  },
  onFinish: () => {
    // Stream completed normally: stop the timer.
    if (inactivityTimerId) {
      clearTimeout(inactivityTimerId);
    }
  },
});

// Start the timer so the wait for the first chunk is also bounded.
resetInactivityTimer();
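
For completeness, a minimal sketch of consuming the stream with this setup; textStream is the async iterable on the streamText result, and the loop assumes it runs inside an async function:

// Consume the stream. If chunks stop arriving mid-stream, the
// inactivity timer above aborts the underlying request.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}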

@lgrammel
Collaborator

I'll consider this when we rework timeouts and fallback providers.
