52 changes: 52 additions & 0 deletions src/content/docs/user-guide/observability-evaluation/metrics.mdx
Expand Up @@ -293,6 +293,58 @@ This summary provides a complete picture of the agent's execution, including cyc
</Tab>
</Tabs>

## Local Execution Traces

<Tabs>
<Tab label="Python">

In addition to aggregate metrics, the Strands Agents SDK automatically collects **local execution traces**: lightweight, in-memory timing trees that capture the hierarchy and duration of operations within the agent loop. These traces are always collected regardless of OpenTelemetry configuration and are returned directly in the `AgentResult`.

Each trace represents a cycle in the agent loop, with child traces for model invocations and tool calls:

```python
from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
result = agent("What is 15 * 8 + 42?")

# Traces are included in the summary output
print(result.metrics.get_summary())
```

Each trace contains:

- **name**: Human-readable label (e.g., "Cycle 1", "stream_messages", "Tool: calculator")
- **duration**: Execution time in seconds
- **children**: Nested traces for operations within the cycle
- **metadata**: Associated data like `cycleId`, `toolUseId`, and `toolName`
- **message**: The model output message (for model invocation traces)

Traces are included in the `get_summary()` output, giving you a complete hierarchical view of agent execution alongside aggregate metrics.
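
To make the tree shape concrete, here is a minimal sketch of walking such a hierarchy. The `Trace` dataclass below is a hypothetical stand-in built from the field list above — it is not the SDK's real class, and the actual attribute names may differ:

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    """Hypothetical mirror of a local execution trace node."""
    name: str
    duration: float  # seconds
    children: list["Trace"] = field(default_factory=list)

def render(trace: Trace, indent: int = 0) -> list[str]:
    """Flatten a trace tree into indented lines, one per node."""
    lines = [f"{'  ' * indent}{trace.name}: {trace.duration:.3f}s"]
    for child in trace.children:
        lines.extend(render(child, indent + 1))
    return lines

# A cycle with a model invocation and a tool call as children
cycle = Trace("Cycle 1", 2.41, [
    Trace("stream_messages", 1.80),
    Trace("Tool: calculator", 0.35),
])
print("\n".join(render(cycle)))
```

This is the same parent/child relationship `get_summary()` reports: each cycle's duration covers its nested model and tool timings.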
</Tab>
<Tab label="TypeScript">

In addition to aggregate metrics, the Strands Agents SDK automatically collects **local execution traces**: lightweight, in-memory timing trees that capture the hierarchy and duration of operations within the agent loop. These traces are always collected regardless of OpenTelemetry configuration and are returned directly in `AgentResult.traces`.

Each trace is an `AgentTrace` instance representing a cycle in the agent loop, with child traces for model invocations and tool calls:

```typescript
--8<-- "user-guide/observability-evaluation/metrics.ts:local_traces"
```

Each `AgentTrace` contains:

- **name**: Human-readable label (e.g., "Cycle 1", "stream_messages", "Tool: calculator")
- **duration**: Execution time in milliseconds
- **children**: Nested `AgentTrace` instances for operations within the cycle
- **metadata**: Associated data like `cycleId`, `toolUseId`, and `toolName`
- **message**: The model output message (for model invocation traces)

Traces are separate from `AgentMetrics` and are accessed via `result.traces`. Note that `AgentResult.toJSON()` excludes traces and metrics by default to keep API responses lean; access them directly via `result.traces` and `result.metrics`.
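
To make the tree shape concrete, here is a minimal sketch of walking such a hierarchy. The `TraceNode` interface below is a hypothetical stand-in built from the field list above — it is not the SDK's real `AgentTrace` type, and the actual property names may differ:

```typescript
// Hypothetical mirror of a local execution trace node
interface TraceNode {
  name: string;
  duration: number; // milliseconds
  children: TraceNode[];
}

// Flatten a trace tree into indented lines, one per node
function renderTrace(trace: TraceNode, indent = 0): string[] {
  const lines = [`${"  ".repeat(indent)}${trace.name}: ${trace.duration}ms`];
  for (const child of trace.children) {
    lines.push(...renderTrace(child, indent + 1));
  }
  return lines;
}

// A cycle with a model invocation and a tool call as children
const cycle: TraceNode = {
  name: "Cycle 1",
  duration: 2410,
  children: [
    { name: "stream_messages", duration: 1800, children: [] },
    { name: "Tool: calculator", duration: 350, children: [] },
  ],
};
console.log(renderTrace(cycle).join("\n"));
```

This is the same parent/child relationship serialized by `JSON.stringify(result.traces)`: each cycle's duration covers its nested model and tool timings.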
</Tab>
</Tabs>

## Best Practices

1. **Monitor Token Usage**: Keep track of token usage to ensure you stay within limits and optimize costs. Set up alerts for when token usage approaches predefined thresholds to avoid unexpected costs.
Expand Down
14 changes: 14 additions & 0 deletions src/content/docs/user-guide/observability-evaluation/metrics.ts
Expand Up @@ -66,6 +66,20 @@ async function agentLoopMetricsExample() {
// --8<-- [end:agent_loop_metrics]
}

// Local traces example
async function localTracesExample() {
// --8<-- [start:local_traces]
const agent = new Agent({
tools: [notebook],
})

const result = await agent.invoke('What is 15 * 8 + 42?')

// Access traces directly from the result
console.log(JSON.stringify(result.traces))
// --8<-- [end:local_traces]
}

// Metrics summary example
async function metricsSummaryExample() {
// --8<-- [start:metrics_summary]
Expand Down
2 changes: 1 addition & 1 deletion src/content/docs/user-guide/quickstart/typescript.mdx
Expand Up @@ -121,7 +121,7 @@ And that's it! We now have a running agent with powerful tools and abilities in

## Understanding What Agents Did

After running an agent, you can understand what happened during execution by examining the agent's messages and through traces and metrics. Every agent invocation returns an `AgentResult` object that contains the data the agent used along with (comming soon) comprehensive observability data.
After running an agent, you can understand what happened during execution by examining the agent's messages, traces, and metrics. Every agent invocation returns an `AgentResult` object that contains the data the agent used along with comprehensive observability data including [local execution traces](../observability-evaluation/metrics.md#local-execution-traces) and [metrics](../observability-evaluation/metrics.md).


```typescript
Expand Down