Strands Agents is a simple yet powerful SDK that takes a model-driven approach to building and running AI agents. The TypeScript SDK brings key features from the Python Strands framework to Node.js environments, enabling type-safe agent development for everything from simple assistants to complex workflows.
- 🪶 Lightweight & Flexible: Simple agent loop that works seamlessly in Node.js and browser environments
- 🔒 Type-Safe Tools: Define tools easily using Zod schemas for robust input validation and type inference
- 📋 Structured Output: Get type-safe, validated responses from LLMs using Zod schemas with automatic retry on validation errors
- 🔌 Model Agnostic: First-class support for Amazon Bedrock and OpenAI, with extensible architecture for custom providers
- 🔗 Built-in MCP: Native support for Model Context Protocol (MCP) clients, enabling access to external tools and servers
- ⚡ Streaming Support: Real-time response streaming for better user experience
- 🎣 Extensible Hooks: Lifecycle hooks for monitoring and customizing agent behavior
- 💬 Conversation Management: Flexible strategies for managing conversation history and context windows
- 🤝 Multi-Agent Orchestration: Graph and Swarm patterns for coordinating multiple agents
Ensure you have Node.js 20+ installed, then:
```shell
npm install @strands-agents/sdk
```

```typescript
import { Agent } from '@strands-agents/sdk'

// Create an agent (uses the default Amazon Bedrock provider)
const agent = new Agent()

// Invoke
const result = await agent.invoke('What is the square root of 1764?')
console.log(result)
```

Note: For the default Amazon Bedrock model provider, you'll need AWS credentials configured and model access enabled for Claude Sonnet 4 in your region.
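The default Bedrock provider picks up credentials through the standard AWS SDK credential chain (environment variables, `~/.aws/credentials`, SSO, and so on). As one minimal sketch for local development, using environment variables:

```shell
# Any standard AWS SDK credential source works; environment variables
# are the simplest option for local development. Replace the placeholder
# values with your own credentials.
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-1   # a region where you have enabled model access
```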
The Agent class is the central orchestrator that manages the interaction loop between users, models, and tools.
```typescript
import { Agent } from '@strands-agents/sdk'

const agent = new Agent({
  systemPrompt: 'You are a helpful assistant.',
})
```

Switch between model providers easily:
Amazon Bedrock (Default)
```typescript
import { Agent, BedrockModel } from '@strands-agents/sdk'

const model = new BedrockModel({
  region: 'us-east-1',
  modelId: 'anthropic.claude-3-5-sonnet-20240620-v1:0',
  maxTokens: 4096,
  temperature: 0.7,
})

const agent = new Agent({ model })
```

OpenAI
```typescript
import { Agent } from '@strands-agents/sdk'
import { OpenAIModel } from '@strands-agents/sdk/openai'

// Automatically uses process.env.OPENAI_API_KEY and defaults to gpt-4o
const model = new OpenAIModel()
const agent = new Agent({ model })
```

Access responses as they are generated:
```typescript
const agent = new Agent()

console.log('Agent response stream:')
for await (const event of agent.stream('Tell me a story about a brave toaster.')) {
  console.log('[Event]', event.type)
}
```

Tools enable agents to interact with external systems and perform actions. Create type-safe tools using Zod schemas:
```typescript
import { Agent, tool } from '@strands-agents/sdk'
import { z } from 'zod'

const weatherTool = tool({
  name: 'get_weather',
  description: 'Get the current weather for a specific location.',
  inputSchema: z.object({
    location: z.string().describe('The city and state, e.g., San Francisco, CA'),
  }),
  callback: (input) => {
    // input is fully typed based on the Zod schema
    return `The weather in ${input.location} is 72°F and sunny.`
  },
})

const agent = new Agent({
  tools: [weatherTool],
})

await agent.invoke('What is the weather in San Francisco?')
```

Vended Tools: The SDK includes optional pre-built tools:
- Notebook Tool: Manage text-based notebooks for persistent note-taking
- File Editor Tool: Perform file system operations (read, write, edit files)
- HTTP Request Tool: Make HTTP requests to external APIs
Get type-safe, validated responses from LLMs by defining the expected output structure with Zod schemas. The agent automatically validates the LLM's response and retries on validation errors:
```typescript
import { Agent } from '@strands-agents/sdk'
import { z } from 'zod'

const PersonSchema = z.object({
  name: z.string().describe('Name of the person'),
  age: z.number().describe('Age of the person'),
  occupation: z.string().describe('Occupation of the person'),
})

// Configure structured output at the agent level
const agent = new Agent({
  structuredOutputSchema: PersonSchema,
})

const result = await agent.invoke('John Smith is a 30-year-old software engineer')

// result.structuredOutput is fully typed based on the schema
console.log(result.structuredOutput.name) // "John Smith"
console.log(result.structuredOutput.age)  // 30
```

Error handling: The agent automatically retries with validation feedback when the LLM returns invalid output. If validation ultimately fails, a StructuredOutputError is thrown:
```typescript
import { StructuredOutputError } from '@strands-agents/sdk'

try {
  const result = await agent.invoke('Extract person info...')
  console.log(result.structuredOutput)
} catch (error) {
  if (error instanceof StructuredOutputError) {
    console.error('Validation failed:', error.message)
  }
}
```

Seamlessly integrate Model Context Protocol (MCP) servers:
```typescript
import { Agent, McpClient } from '@strands-agents/sdk'
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js'

// Create a client for a local MCP server
const documentationTools = new McpClient({
  transport: new StdioClientTransport({
    command: 'uvx',
    args: ['awslabs.aws-documentation-mcp-server@latest'],
  }),
})

const agent = new Agent({
  systemPrompt: 'You are a helpful assistant using MCP tools.',
  tools: [documentationTools], // Pass the MCP client directly as a tool source
})

await agent.invoke('Use a random tool from the MCP server.')
await documentationTools.disconnect()
```

Coordinate multiple agents using built-in orchestration patterns.
Graph — You define a deterministic execution plan. Agents run as nodes in a directed graph, with edges controlling execution order. Parallel execution is supported, and downstream nodes run once all dependencies complete.
```typescript
import { Agent, BedrockModel, Graph } from '@strands-agents/sdk'

const model = new BedrockModel({ maxTokens: 1024 })

const researcher = new Agent({
  model,
  id: 'researcher',
  systemPrompt: 'Research the topic and provide key facts.',
})

const writer = new Agent({
  model,
  id: 'writer',
  systemPrompt: 'Rewrite the research into a polished paragraph.',
})

const graph = new Graph({
  nodes: [researcher, writer],
  edges: [['researcher', 'writer']],
})

const result = await graph.invoke('What is the largest ocean?')
```

Swarm — The agents decide the routing. Each agent chooses whether to hand off to another agent or produce the final response, making the execution path dynamic and model-driven.
```typescript
import { Agent, BedrockModel, Swarm } from '@strands-agents/sdk'

const model = new BedrockModel({ maxTokens: 1024 })

const researcher = new Agent({
  model,
  id: 'researcher',
  description: 'Researches a topic and gathers key facts.',
  systemPrompt: 'Research the answer, then hand off to the writer.',
})

const writer = new Agent({
  model,
  id: 'writer',
  description: 'Writes a polished final answer.',
  systemPrompt: 'Write the final answer. Do not hand off.',
})

const swarm = new Swarm({
  nodes: [researcher, writer],
  start: 'researcher',
  maxSteps: 4,
})

const result = await swarm.invoke('What is the largest ocean?')
```

Both patterns support streaming via .stream() for real-time access to handoff and node execution events. See the examples directory for complete working samples.
For detailed guidance, tutorials, and concept overviews, please visit:
- Official Documentation: Comprehensive guides and tutorials
- API Reference: Complete API documentation
- Examples: Sample applications
- Contributing Guide: Development setup and guidelines
We welcome contributions! See our Contributing Guide for details on:
- Development setup and environment
- Testing and code quality standards
- Pull request process
- Code of Conduct
- Security issue reporting
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
See CONTRIBUTING for more information on reporting security issues.
Strands Agents is currently in public preview. During this period:
- APIs may change as we refine the SDK
- We welcome feedback and contributions