[BUG] Azure Open AI Assistant API Streaming Suddenly Returning 400 Errors #48502

Open
noelstieglitz opened this issue Mar 1, 2025 · 2 comments

Comments

noelstieglitz commented Mar 1, 2025

Library name and version

Azure.AI.OpenAI 2.2.0-beta.2

Describe the bug

On 2/26 around 8:22 AM, the Azure OpenAI Assistants API stopped allowing streaming responses from an Azure-deployed OpenAI gpt-4o model. Here is the error:

HTTP 400 (invalid_request_error: unsupported_model) Parameter: model Unsupported value: 'stream' does not support 'true' with this model. Supported values are: 'false'.

Model Deployment details

  • Model: gpt-4o
  • Model name: oia-oiaoiaoia-dev-eastus2-001-global-standard-gpt4o (NOTE: this seems important to replicate the issue)
  • Model version: 2024-05-13 (though others as well)
  • Region: eastus2
  • Deployment Type: Global Standard

The deployed model had been working for quite some time. I tried deploying a new gpt-4o model and had the same issue. No code or package changes on our end could explain the breakage. We did try upgrading the NuGet package (from Azure.AI.OpenAI 2.2.0-beta.1 to Azure.AI.OpenAI 2.2.0-beta.2) to see if that would resolve the issue; it did not.
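
For reference, the call that triggers the error is the streaming run creation through the beta Assistants client. A minimal sketch of that kind of call follows (endpoint, key, and IDs are placeholders rather than the real values, and the client/method names reflect my understanding of the 2.2.0-beta.2 surface, not my exact code):

#pragma warning disable OPENAI001 // Assistants API is experimental in the beta SDK
using System;
using System.ClientModel;
using Azure.AI.OpenAI;
using OpenAI.Assistants;

// Placeholder endpoint and key; the real assistant is backed by the
// "oia-oiaoiaoia-dev-eastus2-001-global-standard-gpt4o" deployment described above.
var azureClient = new AzureOpenAIClient(
    new Uri("https://[redacted].openai.azure.com"),
    new ApiKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")!));
AssistantClient assistantClient = azureClient.GetAssistantClient();

// Create a streaming run on an existing thread (thread/assistant IDs are placeholders).
await foreach (StreamingUpdate update in
    assistantClient.CreateRunStreamingAsync("thread_[redacted]", "asst_[redacted]"))
{
    if (update is MessageContentUpdate contentUpdate)
    {
        Console.Write(contentUpdate.Text);
    }
}
// Against the affected deployment, enumerating this stream fails with the
// HTTP 400 unsupported_model error above instead of yielding updates.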

I was able to work around this issue by deploying a gpt-4o-mini model, but obviously that has other implications. I also tried some of the newly available o-series models, which, as far as I can tell, should support streaming; those had the same issue as gpt-4o. Here's an example request I captured using a proxy:

POST /openai/threads/[redacted]/runs?api-version=2025-01-01-preview HTTP/1.1
Host: [redacted].openai.azure.com
OpenAI-Beta: assistants=v2
Accept: application/json
User-Agent: azsdk-net-AI.OpenAI/2.2.0-beta.2 (.NET 9.0.2; Microsoft Windows 10.0.26100)
x-ms-client-request-id: [redacted]
api-key: [redacted]
Request-Context: [redacted]
Request-Id: [redacted]
traceparent: [redacted]
Content-Type: application/json
Content-Length: 87

Response:

{
  "error": {
    "message": "Unsupported value: 'stream' does not support 'true' with this model. Supported values are: 'false'.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "unsupported_model"
  }
}

I can share a curl request that consistently reproduces the issue as well as the redacted bits above with Azure support personnel.

I have since been able to deploy gpt-4o models that stream successfully. It might be a deployment name length issue, but I have seen some newly deployed models (same name length/pattern, deployment type, model version, model name, and content filter) exhibit the issue while another did not. Even looking at the network traffic, the requests are almost identical between the working and non-working models.

Edit: This does appear to be related to the name of the model. Creating a brand new gpt-4o model with:

  • Model name gpt4o-ncs - can stream
  • Model name bai-baibaibai-dev-eastus2-001-global-standard-gpt4o - can stream
  • Model name oia-oiaoiaoia-dev-eastus2-001-global-standard-gpt4o - can NOT stream
  • Model name oai-oaioaioai-dev-eastus2-001-global-standard-gpt4o - can NOT stream

I tried creating a support ticket via Azure but was unsuccessful.

Expected behavior

Streaming is still supported when using the Assistants API with an Azure-deployed GPT-4o model.

Actual behavior

The following error is returned along with an HTTP 400:

{
  "error": {
    "message": "Unsupported value: 'stream' does not support 'true' with this model. Supported values are: 'false'.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "unsupported_model"
  }
}

Reproduction Steps

  1. Create a gpt-4o deployment whose name starts with "o" (e.g. oia-oiaoiaoia-dev-eastus2-001-global-standard-gpt4o), using the deployment details in the Environment section below.

  2. Issue the following curl request (again, happy to share the redacted bits):

curl -H "Host:[redacted].openai.azure.com" -H "OpenAI-Beta: assistants=v2" -H "Accept: application/json" -H "User-Agent: azsdk-net-AI.OpenAI/2.2.0-beta.2 (.NET 9.0.2; Microsoft Windows 10.0.26100)" -H "x-ms-client-request-id: [redacted]" -H "api-key: [redacted]" -H "Request-Context: appId=[redacted]" -H "Request-Id: [redacted]" -H "traceparent: [redacted]" -H "Content-Type: application/json" --data-binary "{\"assistant_id\":\"[redacted]\",\"stream\":true,\"additional_messages\":[]}" "https://[redacted].openai.azure.com/openai/threads/[redacted]/runs?api-version=2025-01-01-preview"

Environment

Model Deployment details

  • Model: gpt-4o

  • Model version: 2024-05-13 (though others as well)

  • Region: eastus2

  • Deployment Type: Global Standard


github-actions bot commented Mar 1, 2025

Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @jpalvarezl @ralph-msft @trrwilson.

noelstieglitz (Author) commented

For anyone else who is having this issue: according to Microsoft support, this is a known bug for deployment names starting with the letter "o":

I wanted to let you know that this is a known issue where deployment names starting with the character "o" are causing 400 errors, whereas those starting with other characters do not have this issue. I kindly request you to create a new deployment with a name not starting with the character "o".

😲

I'll close this issue once I hear back from support and confirm the problem has been addressed.
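
Acting on that guidance from the SDK side just means pointing the assistant at a deployment whose name does not start with "o". A minimal sketch (same placeholder client setup as the earlier sketch; the assistant name and instructions are illustrative, and "gpt4o-ncs" is one of the deployment names tested above that streams correctly):

#pragma warning disable OPENAI001 // Assistants API is experimental in the beta SDK
using System;
using System.ClientModel;
using Azure.AI.OpenAI;
using OpenAI.Assistants;

// Placeholder endpoint and key, as in the earlier sketch.
var azureClient = new AzureOpenAIClient(
    new Uri("https://[redacted].openai.azure.com"),
    new ApiKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")!));
AssistantClient assistantClient = azureClient.GetAssistantClient();

// Create an assistant backed by a deployment whose name does not start with "o".
ClientResult<Assistant> created = await assistantClient.CreateAssistantAsync(
    model: "gpt4o-ncs",
    new AssistantCreationOptions
    {
        Name = "streaming-workaround",                 // illustrative
        Instructions = "You are a helpful assistant."  // illustrative
    });
Assistant assistant = created.Value;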
