Checks
Strands Version
1.35.0
Python Version
3.12
Operating System
al2023
Installation Method
pip
Steps to Reproduce
- Configure a Bedrock model with guardrails, guardrail traces, and otel telemetry enabled:

```python
from strands import Agent
from strands.models import BedrockModel

model = BedrockModel(
    model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
    guardrail_id="<your_guardrail_id>",
    guardrail_version="DRAFT",
    guardrail_trace="enabled",  # or "enabled_full"
)
agent = Agent(model=model)
```

- Invoke the agent in streaming mode (the default) with a prompt that triggers the guardrail (e.g. a topic policy violation):

```python
for event in agent.stream("trigger your guardrail here"):
    pass
```
- Inspect the OpenTelemetry spans emitted by the SDK — the guardrail trace data is missing
Expected Behavior
When a guardrail fires (stopReason: "guardrail_intervened"), the Bedrock Converse API returns detailed trace data in the streaming response metadata describing which policy was triggered and why (topic policy, content filter, word policy, PII detection, etc.). This trace data should be propagated through the streaming pipeline and recorded as a span event, making it visible in any OTLP-compatible backend (Langfuse, Jaeger, Datadog, etc.).
Actual Behavior
The guardrail trace data is silently dropped during stream processing. extract_usage_metrics() in src/strands/event_loop/streaming.py only extracts usage and metrics from the MetadataEvent, ignoring the trace field. The data never reaches telemetry, so there is no observability into which guardrail policy fired or why.
This is despite the SDK already:
- Requesting trace data via guardrailConfig.trace: "enabled"
- Receiving it from Bedrock in the streaming response metadata
- Having full type definitions for it (GuardrailTrace, GuardrailAssessment, etc. in types/guardrails.py)
- Including trace: Trace | None on MetadataEvent
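To make the gap concrete, here is a hypothetical, simplified sketch (not the SDK's actual code) of an extraction step that reads usage and metrics from the metadata event but never touches the trace field:

```python
# Simplified stand-in for the metadata extraction step: usage and
# metrics are read out, but "trace" is never looked at.
def extract_usage_metrics(event: dict) -> tuple[dict, dict]:
    metadata = event.get("metadata", {})
    usage = metadata.get("usage", {})
    metrics = metadata.get("metrics", {})
    # metadata.get("trace") is never read, so any guardrail assessment
    # carried in the stream's final metadata event is dropped here.
    return usage, metrics

metadata_event = {
    "metadata": {
        "usage": {"inputTokens": 12, "outputTokens": 0, "totalTokens": 12},
        "metrics": {"latencyMs": 340},
        "trace": {"guardrail": {"inputAssessment": {}}},  # silently lost
    }
}
usage, metrics = extract_usage_metrics(metadata_event)
```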
Additional Context
- Strands SDK version: v1.35.0
- The gap is specifically in the streaming path (process_stream() → extract_usage_metrics() → ModelStopReason → event_loop → tracer)
- The trace data returned by Bedrock looks like:
```json
{
  "guardrail": {
    "inputAssessment": {
      "<assessment_id>": {
        "topicPolicy": {
          "topics": [{"name": "Prior Authorization", "type": "DENY", "action": "BLOCKED"}]
        },
        "contentPolicy": { "filters": [...] },
        "wordPolicy": { "customWords": [...] },
        "sensitiveInformationPolicy": { "piiEntities": [...] }
      }
    }
  }
}
```
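Since OTel span-event attributes must be primitive values, nested assessment payloads would need flattening before being recorded. A minimal illustration (helper name and attribute keys are hypothetical, not part of the SDK), covering only the inputAssessment shape shown above:

```python
import json

# Hypothetical helper: flatten the inputAssessment portion of a Bedrock
# guardrail trace into primitive-valued attributes suitable for an OTel
# span event. Nested policy payloads are JSON-encoded strings.
def flatten_guardrail_trace(trace: dict) -> dict:
    attrs = {}
    assessments = trace.get("guardrail", {}).get("inputAssessment", {})
    for assessment_id, assessment in assessments.items():
        for policy_name, policy_data in assessment.items():
            attrs[f"gen_ai.guardrail.{policy_name}"] = json.dumps(policy_data)
    return attrs

trace = {
    "guardrail": {
        "inputAssessment": {
            "abc123": {
                "topicPolicy": {
                    "topics": [{"name": "Prior Authorization",
                                "type": "DENY", "action": "BLOCKED"}]
                }
            }
        }
    }
}
attrs = flatten_guardrail_trace(trace)
```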
Possible Solution
Propagate the trace field through the streaming pipeline:
- extract_usage_metrics() in streaming.py → also extract trace from the metadata event
- ModelStopReason in _events.py → carry the trace as an optional property
- event_loop.py → pass the trace to tracer.end_model_invoke_span()
- tracer.py → record it as a gen_ai.guardrail.assessment span event
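The tracer step could look roughly like the sketch below. It uses a stand-in span class so the snippet runs without opentelemetry installed; the real change would call add_event on the live OTel span with the same name and attribute shape (function and attribute names are illustrative):

```python
import json

# Stand-in for an OTel span; the real OpenTelemetry Span exposes the
# same add_event(name, attributes) method used here.
class FakeSpan:
    def __init__(self):
        self.events = []

    def add_event(self, name, attributes=None):
        self.events.append((name, dict(attributes or {})))

def record_guardrail_event(span, trace):
    """Record a guardrail trace as a gen_ai.guardrail.assessment event."""
    if not trace:
        return
    span.add_event(
        "gen_ai.guardrail.assessment",
        # OTel attribute values must be primitives, so the nested
        # assessment payload is JSON-encoded here.
        attributes={"gen_ai.guardrail.trace": json.dumps(trace)},
    )

span = FakeSpan()
record_guardrail_event(span, {"guardrail": {"inputAssessment": {}}})
```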
I'm happy to raise a PR for this myself if the approach seems reasonable.
Related Issues
#1925