Getting started guide for implementing stateful AI agents using Azure Durable Functions orchestration with automatic checkpointing and replay semantics.
- Python 3.10+ runtime environment
- Azure Functions Core Tools v4.x (`npm install -g azure-functions-core-tools@4 --unsafe-perm true`)
- Azure OpenAI service endpoint with a model deployment
- Docker (optional; required only for the Durable Task Scheduler emulator)
This framework is designed specifically for Azure Functions applications. You need to create a Python Functions app to use the OpenAI Agent SDK Integration with Azure Durable Functions (Preview).
For new users: If you're new to Azure Functions, work through the official getting-started guides first.
For experienced Functions users: Create a new Python Functions app or use an existing one.
Note: The samples-v2/openai_agents directory contains a complete working example you can reference or use as a starting point.
Create and activate a virtual environment to isolate dependencies:

```bash
# Create virtual environment
python -m venv venv

# Activate virtual environment
# On macOS/Linux:
source venv/bin/activate

# On Windows:
venv\Scripts\activate
```

Add the OpenAI Agents dependencies to your requirements.txt:
```text
azure-functions-durable
azure-functions
openai
openai-agents
azure-identity
```

Then install them:

```bash
pip install -r requirements.txt
```

Dependency & compatibility: The `azure-functions-durable` package does NOT declare `openai` or `openai-agents` as dependencies. If you need the OpenAI Agents Integration for Reliability on Azure Functions (Preview), explicitly add `openai` and `openai-agents` to your `requirements.txt` (see `samples-v2/openai_agents/requirements.txt`). This integration is validated against the versions currently pinned there (`openai==1.107.3`, `openai-agents==0.3.0`). Because the OpenAI ecosystem changes rapidly, pin to these versions first to rule out a version mismatch before filing an issue.
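Given how version-sensitive the preview integration is, a small helper can flag drift from the validated pins before you debug anything else. This is an illustrative sketch, not part of the SDK: the `PINS` mapping and both helpers are assumptions introduced here.

```python
from importlib import metadata

# Versions the integration is validated against
# (mirrors samples-v2/openai_agents/requirements.txt)
PINS = {"openai": "1.107.3", "openai-agents": "0.3.0"}

def check_pins(installed: dict, pins: dict = PINS) -> list:
    """Return the names of packages whose installed version differs from the pin."""
    return sorted(name for name, want in pins.items() if installed.get(name) != want)

def installed_versions(names):
    """Look up installed distribution versions, skipping packages that are absent."""
    found = {}
    for name in names:
        try:
            found[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            pass  # missing packages show up as mismatches in check_pins
    return found
```

Running `check_pins(installed_versions(PINS))` in your activated virtual environment lists any package that is missing or off-pin, which is the first thing to rule out before filing an issue.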
Durable Task Scheduler is the preferred backend for this integration as it provides enhanced performance, better observability, and simplified local development. While not a hard requirement, it's strongly recommended for production workloads.
IMPORTANT: Ensure your function app is using the preview extension bundle version 4.34.0 or higher by specifying it in host.json:
```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
    "version": "[4.34.0, 5.0.0)"
  }
}
```

There are two ways to configure the backend locally:
The emulator simulates a scheduler and taskhub in a Docker container, making it ideal for development and learning.
- Pull the Docker image for the emulator:

  ```bash
  docker pull mcr.microsoft.com/dts/dts-emulator:latest
  ```

- Run the emulator:

  ```bash
  docker run --name dtsemulator -d -p 8080:8080 -p 8082:8082 mcr.microsoft.com/dts/dts-emulator:latest
  ```

- Wait for container readiness (approximately 10-15 seconds)
- Verify emulator status:

  ```bash
  curl http://localhost:8080/health
  ```

Note: The sample code automatically uses the default emulator settings (endpoint: http://localhost:8080, taskhub: default). No additional environment variables are required.
If you prefer using Azure Storage as the backend (legacy approach):
```bash
# Uses local storage emulator - requires Azurite
npm install -g azurite
azurite --silent --location /tmp/azurite --debug /tmp/azurite/debug.log
```

Update local.settings.json:

```json
{
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}
```

- Install project dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure service settings:
Update local.settings.json with your service configuration:
```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AZURE_OPENAI_ENDPOINT": "https://<resource-name>.openai.azure.com/",
    "AZURE_OPENAI_DEPLOYMENT": "<deployment-name>",
    "OPENAI_DEFAULT_MODEL": "<deployment-name>",
    "AZURE_OPENAI_API_VERSION": "2024-10-01-preview",
    "DURABLE_TASK_SCHEDULER_CONNECTION_STRING": "Endpoint=http://localhost:8080;Authentication=None;",
    "TASKHUB": "default"
  }
}
```

Execute the included hello world sample:
```python
# basic/hello_world.py - Standard OpenAI Agent
from agents import Agent, Runner

def main():
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
    )
    result = Runner.run_sync(agent, "Tell me about recursion in programming.")
    return result.final_output
```

Durable Transformation: The @app.durable_openai_agent_orchestrator decorator in function_app.py wraps this agent execution within a Durable Functions orchestrator, persisting agent state at each LLM and tool interaction.
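The wiring in function_app.py can be sketched roughly as follows. This is a hedged sketch, not the sample's exact code: the `DFApp` setup and HTTP starter follow the standard Durable Functions Python v2 model, and whether the decorated orchestrator receives a `context` argument is an assumption here, so check samples-v2/openai_agents/function_app.py for the exact signature.

```python
# function_app.py - illustrative sketch; see samples-v2/openai_agents for the real wiring
import azure.functions as func
import azure.durable_functions as df
from agents import Agent, Runner

app = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

# The preview decorator replays the agent run as a durable orchestration,
# checkpointing state at each LLM and tool interaction.
@app.durable_openai_agent_orchestrator
def hello_world(context):
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
    )
    result = Runner.run_sync(agent, "Tell me about recursion in programming.")
    return result.final_output

# Standard HTTP starter that schedules an orchestration instance by name
@app.route(route="orchestrators/{functionName}")
@app.durable_client_input(client_name="client")
async def http_start(req: func.HttpRequest, client) -> func.HttpResponse:
    instance_id = await client.start_new(req.route_params["functionName"])
    return client.create_check_status_response(req, instance_id)
```

The starter returns the standard Durable Functions management payload (statusQueryGetUri and friends), which is what the curl command in the next step exercises.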
- Start the Azure Functions host:
  Navigate to the samples-v2/openai_agents directory and run:

  ```bash
  func start --port 7071
  ```

- Initiate an orchestration instance:

  ```bash
  curl -X POST http://localhost:7071/api/orchestrators/hello_world \
    -H "Content-Type: application/json"
  ```

The response contains orchestration instance metadata:
```json
{
  "id": "f4b2c8d1e9a7...",
  "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/f4b2c8d1e9a7...",
  "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/f4b2c8d1e9a7.../raiseEvent/{eventName}",
  "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/f4b2c8d1e9a7.../terminate",
  "purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/f4b2c8d1e9a7..."
}
```

- Monitor execution via the Durable Task Scheduler dashboard:

Navigate to http://localhost:8082 for real-time orchestration monitoring:
- Instance execution timeline with LLM call latencies
- State transition logs and checkpoint data
- Retry attempt tracking and failure analysis
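For scripted monitoring, the statusQueryGetUri returned when the orchestration was started can be polled until the instance reaches a terminal runtime status. A minimal sketch using only the standard library; the `poll_status` and `is_terminal` helpers and the timing defaults are illustrative, not part of the SDK:

```python
import json
import time
import urllib.request

# Runtime statuses after which the orchestration instance will not change again
TERMINAL_STATUSES = {"Completed", "Failed", "Terminated"}

def is_terminal(runtime_status: str) -> bool:
    """True once the orchestration has reached a final state."""
    return runtime_status in TERMINAL_STATUSES

def poll_status(status_url: str, interval: float = 2.0, timeout: float = 120.0) -> dict:
    """Poll statusQueryGetUri until the orchestration finishes or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(status_url) as resp:
            status = json.load(resp)
        if is_terminal(status.get("runtimeStatus", "")):
            return status
        time.sleep(interval)
    raise TimeoutError(f"orchestration still running after {timeout}s")
```

Calling `poll_status` with the statusQueryGetUri from the previous step returns the final status document, whose `output` field holds the agent's result for the hello world sample.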
- See the Reference Documentation for complete technical details.