6 changes: 4 additions & 2 deletions docs/user-guide/concepts/model-providers/index.md
@@ -13,7 +13,8 @@ The following table shows all model providers supported by Strands Agents SDK an
| [Custom Providers](custom_model_provider.md) | ✅ | ✅ |
| [Amazon Bedrock](amazon-bedrock.md) | ✅ | ✅ |
| [Amazon Nova](amazon-nova.md) | ✅ | ❌ |
| [OpenAI](openai.md) | ✅ | ✅ |
| [OpenAI (Chat Completions)](openai.md) | ✅ | ✅ |
| [OpenAI (Responses API)](openai-responses.md) | ✅ | ❌ |
| [Anthropic](anthropic.md) | ✅ | ❌ |
| [Gemini](gemini.md) | ✅ | ✅ |
| [LiteLLM](litellm.md) | ✅ | ❌ |
@@ -98,7 +99,8 @@ Each provider follows a similar pattern for initialization and usage. Models are
### Explore Model Providers

- **[Amazon Bedrock](amazon-bedrock.md)** - Default provider with wide model selection, enterprise features, and full Python/TypeScript support
- **[OpenAI](openai.md)** - GPT models with streaming support
- **[OpenAI (Chat Completions)](openai.md)** - GPT models via the Chat Completions API
- **[OpenAI (Responses API)](openai-responses.md)** - GPT models via the newer Responses API
- **[Gemini](gemini.md)** - Google's Gemini models with tool calling support
- **[Custom Providers](custom_model_provider.md)** - Build your own model integration
- **[Anthropic](anthropic.md)** - Direct Claude API access (Python only)
122 changes: 122 additions & 0 deletions docs/user-guide/concepts/model-providers/openai-responses.md
@@ -0,0 +1,122 @@
# OpenAI Responses API

!!! info "Language Support"
    This provider is only supported in Python.

The [OpenAI Responses API](https://platform.openai.com/docs/api-reference/responses/create) is OpenAI's latest API for interacting with their models. The Strands Agents SDK provides a dedicated `OpenAIResponsesModel` provider that uses this API, supporting streaming, tool calling, and structured output.

!!! note "Looking for the Chat Completions API?"
    If you want to use OpenAI's Chat Completions API instead, see the [OpenAI (Chat Completions)](openai.md) provider. For the Python SDK, the Chat Completions provider remains the default path; this Responses API provider is an alternative you can switch to.

## Installation

OpenAI is configured as an optional dependency in Strands Agents. The Responses API provider requires the OpenAI Python SDK **v2.0.0 or later**. To install, run:

```bash
pip install 'strands-agents[openai]' strands-agents-tools
```

## Usage

After installing dependencies, you can import and initialize the OpenAI Responses API provider as follows:

```python
from strands import Agent
from strands.models.openai_responses import OpenAIResponsesModel
from strands_tools import calculator

model = OpenAIResponsesModel(
    client_args={
        "api_key": "<KEY>",
    },
    model_id="gpt-4o",
    params={
        "max_output_tokens": 1000,
        "temperature": 0.7,
    }
)

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2")
print(response)
```

To connect to a custom OpenAI-compatible server:

```python
model = OpenAIResponsesModel(
    client_args={
        "api_key": "<KEY>",
        "base_url": "<URL>",
    },
    ...
)
```

## Configuration

### Client Configuration

The `client_args` configure the underlying `AsyncOpenAI` client. For a complete list of available arguments, please refer to the OpenAI [source](https://github.com/openai/openai-python).
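As an illustration, a few commonly used client arguments can be passed through `client_args` (a sketch: `timeout` and `max_retries` are standard `AsyncOpenAI` constructor arguments, but verify them against your installed openai SDK version):

```python
from strands.models.openai_responses import OpenAIResponsesModel

# Sketch: every key in client_args is forwarded to the AsyncOpenAI
# constructor; confirm supported arguments in the openai-python source.
model = OpenAIResponsesModel(
    client_args={
        "api_key": "<KEY>",
        "timeout": 30.0,    # request timeout in seconds
        "max_retries": 2,   # client-side retries for transient errors
    },
    model_id="gpt-4o",
)
```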

### Model Configuration

The model configuration sets parameters for inference:

| Parameter | Description | Example | Options |
|-----------|-------------|---------|---------|
| `model_id` | ID of a model to use | `gpt-4o` | [reference](https://platform.openai.com/docs/models) |
| `params` | Model specific parameters | `{"max_output_tokens": 1000, "temperature": 0.7}` | [reference](https://platform.openai.com/docs/api-reference/responses/create) |

## Troubleshooting

**Module Not Found**

If you encounter the error `ModuleNotFoundError: No module named 'openai'`, this means you haven't installed the `openai` dependency in your environment. To fix, run `pip install 'strands-agents[openai]'`.

**OpenAI SDK Version**

The Responses API provider requires the OpenAI Python SDK v2.0.0 or later. If you encounter an `ImportError` about the SDK version, upgrade with `pip install --upgrade openai`.
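One way to confirm which SDK version your environment actually resolves is a small stdlib-only check (a sketch; the `(2, 0, 0)` floor comes from the requirement above):

```python
from importlib.metadata import version, PackageNotFoundError

MIN_OPENAI = (2, 0, 0)

def parse_semver(text):
    """Parse 'MAJOR.MINOR.PATCH...' into a comparable integer tuple,
    ignoring any non-numeric suffix such as 'rc1'."""
    parts = []
    for piece in text.split(".")[:3]:
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def openai_sdk_status():
    """Return 'missing', 'too-old', or 'ok' for the installed openai SDK."""
    try:
        installed = parse_semver(version("openai"))
    except PackageNotFoundError:
        return "missing"
    return "ok" if installed >= MIN_OPENAI else "too-old"

print(openai_sdk_status())
```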

## Advanced Features

### Structured Output

The OpenAI Responses API provider supports structured output through the `responses.parse(...)` endpoint. When you use `Agent.structured_output()`, the Strands SDK uses this to return typed results conforming to your Pydantic model.

```python
from pydantic import BaseModel, Field
from strands import Agent
from strands.models.openai_responses import OpenAIResponsesModel

class PersonInfo(BaseModel):
    """Extract person information from text."""
    name: str = Field(description="Full name of the person")
    age: int = Field(description="Age in years")
    occupation: str = Field(description="Job or profession")

model = OpenAIResponsesModel(
    client_args={"api_key": "<KEY>"},
    model_id="gpt-4o",
)

agent = Agent(model=model)

result = agent.structured_output(
    PersonInfo,
    "John Smith is a 30-year-old software engineer working at a tech startup."
)

print(f"Name: {result.name}") # "John Smith"
print(f"Age: {result.age}") # 30
print(f"Job: {result.occupation}") # "software engineer"
```

### Reasoning Models

The Responses API provider supports reasoning models (such as o1 and o3) that include chain-of-thought reasoning in their responses. Reasoning content is automatically captured and streamed as `reasoningContent` events.
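A reasoning model is selected like any other model ID. The sketch below is hedged: the `"o3-mini"` model ID and the `reasoning`/`effort` parameter shape follow the OpenAI Responses API reference, and you should confirm they are available for your account and model.

```python
from strands import Agent
from strands.models.openai_responses import OpenAIResponsesModel

# Sketch: the reasoning parameter follows the OpenAI Responses API
# reference; supported effort levels vary by model.
model = OpenAIResponsesModel(
    client_args={"api_key": "<KEY>"},
    model_id="o3-mini",
    params={
        "reasoning": {"effort": "medium"},
        "max_output_tokens": 2000,
    },
)

agent = Agent(model=model)
# During streaming, chain-of-thought output arrives as
# reasoningContent events alongside the final answer.
```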

## References

- [OpenAI Responses API](https://platform.openai.com/docs/api-reference/responses/create)
- [OpenAI (Chat Completions) Provider](openai.md)
8 changes: 6 additions & 2 deletions docs/user-guide/concepts/model-providers/openai.md
@@ -1,6 +1,9 @@
# OpenAI
# OpenAI (Chat Completions)

[OpenAI](https://platform.openai.com/docs/overview) is an AI research and deployment company that provides a suite of powerful language models. The Strands Agents SDK implements an OpenAI provider, allowing you to run agents against any OpenAI or OpenAI-compatible model.
[OpenAI](https://platform.openai.com/docs/overview) is an AI research and deployment company that provides a suite of powerful language models. The Strands Agents SDK implements an OpenAI provider using the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat/create), allowing you to run agents against any OpenAI or OpenAI-compatible model.

!!! note "Looking for the Responses API?"
    OpenAI also offers a newer [Responses API](https://platform.openai.com/docs/api-reference/responses/create). The Strands SDK provides a dedicated provider for it; see the [OpenAI Responses API](openai-responses.md) provider.

## Installation

@@ -203,3 +206,4 @@ Users can pass their own custom OpenAI client to the OpenAIModel for Strands Age

- [API](../../../api-reference/python/models/model.md)
- [OpenAI](https://platform.openai.com/docs/overview)
- [OpenAI Responses API Provider](openai-responses.md)
3 changes: 2 additions & 1 deletion mkdocs.yml
@@ -117,7 +117,8 @@ nav:
- LlamaAPI: user-guide/concepts/model-providers/llamaapi.md
- MistralAI: user-guide/concepts/model-providers/mistral.md
- Ollama: user-guide/concepts/model-providers/ollama.md
- OpenAI: user-guide/concepts/model-providers/openai.md
- OpenAI (Chat Completions): user-guide/concepts/model-providers/openai.md
- OpenAI (Responses API): user-guide/concepts/model-providers/openai-responses.md
- SageMaker: user-guide/concepts/model-providers/sagemaker.md
- Writer: user-guide/concepts/model-providers/writer.md
- Custom Providers: user-guide/concepts/model-providers/custom_model_provider.md