Every Agent ultimately calls an LLM. The SDK abstracts models behind two lightweight
@@ -54,22 +55,11 @@ Second, you can set a default model for a `Runner` instance. If you don't set a
When you use any of GPT-5's reasoning models ([`gpt-5`](https://platform.openai.com/docs/models/gpt-5), [`gpt-5-mini`](https://platform.openai.com/docs/models/gpt-5-mini), or [`gpt-5-nano`](https://platform.openai.com/docs/models/gpt-5-nano)) this way, the SDK applies sensible `modelSettings` by default. Specifically, it sets both `reasoning.effort` and `verbosity` to `"low"`. To adjust the reasoning effort for the default model, pass your own `modelSettings`:
-```ts
-import { Agent } from '@openai/agents';
-
-const myAgent = new Agent({
-  name: 'My Agent',
-  instructions: "You're a helpful agent.",
-  modelSettings: {
-    providerData: {
-      reasoning: { effort: 'minimal' },
-      text: { verbosity: 'low' },
-    },
-  },
-  // If OPENAI_DEFAULT_MODEL=gpt-5 is set, passing only modelSettings works.
-  // It's also fine to pass a GPT-5 model name explicitly:
-  // model: 'gpt-5',
-});
-```
+<Code
+  lang="typescript"
+  code={gpt5DefaultModelSettingsExample}
+  title="Customize GPT-5 default settings"
+/>
For lower latency, using either [`gpt-5-mini`](https://platform.openai.com/docs/models/gpt-5-mini) or [`gpt-5-nano`](https://platform.openai.com/docs/models/gpt-5-nano) with `reasoning.effort="minimal"` will often return responses faster than the default settings. However, some built-in tools in the Responses API (such as file search and image generation) do not support `"minimal"` reasoning effort, which is why the Agents SDK defaults to `"low"`.
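As a sketch of such a low-latency setup (the agent name and instructions below are illustrative; the `modelSettings` shape matches the `providerData.reasoning` / `providerData.text` fields shown in the diff above):

```typescript
import { Agent } from '@openai/agents';

// Illustrative low-latency agent: gpt-5-mini with minimal reasoning effort.
// Note: built-in tools such as file search and image generation do not
// support "minimal", so reserve this configuration for plain text responses.
const fastAgent = new Agent({
  name: 'Fast Agent',
  instructions: "You're a concise, helpful agent.",
  model: 'gpt-5-mini',
  modelSettings: {
    providerData: {
      reasoning: { effort: 'minimal' },
      text: { verbosity: 'low' },
    },
  },
});
```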