fix(chat): Improve agent loop tracing (#303)
Conversation
Each iteration of the pi-agent-core agent loop now produces its own `gen_ai.chat` Sentry span via a traced `streamFn` wrapper. This gives visibility into individual LLM calls (input/output messages, token usage, finish reasons) nested under the parent `gen_ai.invoke_agent` span.

Co-Authored-By: Claude <noreply@anthropic.com>
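A minimal sketch of the wrapper pattern the description refers to (assumptions: the names `tracedStreamFn` and the `startInactiveSpan({ name, op })` signature follow the Sentry JS SDK v8 manual-instrumentation API; the tracer is stubbed here so the sketch is self-contained and does not reflect the PR's exact code):

```typescript
type StreamFn = (prompt: string) => Promise<string>;

// Stub of the subset of a Sentry span used in this sketch.
interface SpanLike {
  setAttribute(key: string, value: unknown): void;
  end(): void;
}

type StartInactiveSpan = (opts: { name: string; op: string }) => SpanLike;

// Wraps the base streamFn so every LLM call gets its own gen_ai.chat span.
// In a real Sentry setup this span nests under the currently active
// gen_ai.invoke_agent span of the agent loop.
function tracedStreamFn(
  base: StreamFn,
  startInactiveSpan: StartInactiveSpan,
): StreamFn {
  return async (prompt: string): Promise<string> => {
    const span = startInactiveSpan({ name: "chat model", op: "gen_ai.chat" });
    span.setAttribute("gen_ai.request.messages", prompt);
    try {
      const result = await base(prompt);
      span.setAttribute("gen_ai.response.text", result);
      return result;
    } finally {
      span.end(); // span always closes, on success or failure
    }
  };
}
```

With this shape, each loop iteration calls the wrapped function and produces one `gen_ai.chat` span per LLM call.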
@obostjancic is attempting to deploy a commit to the Sentry Team on Vercel. A member of the Team first needs to authorize it.
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Reviewed by Cursor Bugbot for commit a9661dd. Configure here.
```ts
  () => {
    span.end();
  },
);
```
Span leaks if success callback throws unexpectedly
Low Severity
The stream.result().then(successHandler, rejectionHandler) creates a floating promise. If any statement inside the success handler (e.g., buildChatEndAttributes or span.setAttribute) throws before span.end() is reached, the span never ends (leaks) and the error becomes an unhandled promise rejection. Wrapping the success handler body in try/finally with span.end() in finally would ensure the span always closes, matching the defensive style of the catch block at line 114.
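A sketch of the suggested fix, under stated assumptions: `buildChatEndAttributes` here is a stand-in for the helper named in the review (its real signature is not shown in this excerpt), and the span interface is stubbed for illustration:

```typescript
// Stub of the subset of a Sentry span used in this sketch.
interface SpanLike {
  setAttribute(key: string, value: unknown): void;
  end(): void;
}

// Stand-in attribute builder; the real one may throw on a malformed
// result, which is exactly the scenario that leaks the span today.
function buildChatEndAttributes(result: {
  stopReason?: string;
}): Record<string, unknown> {
  if (result.stopReason === undefined) {
    throw new Error("malformed stream result");
  }
  return { "gen_ai.response.finish_reasons": result.stopReason };
}

// Success-handler body wrapped in try/finally: the span ends on the
// happy path AND when the attribute builder throws, mirroring the
// defensive style of the catch block.
function onStreamSuccess(span: SpanLike, result: { stopReason?: string }): void {
  try {
    for (const [key, value] of Object.entries(buildChatEndAttributes(result))) {
      span.setAttribute(key, value);
    }
  } finally {
    span.end();
  }
}
```

Passing `onStreamSuccess` as the fulfillment callback of `stream.result().then(...)` guarantees the span closes even if attribute construction throws partway through.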
```ts
} catch (error) {
  span.end();
  throw error;
}
```
Error spans lack status, appearing successful in Sentry
Medium Severity
When base() throws (catch block) or stream.result() rejects (rejection handler), the span is ended without setting its status to error. Unlike Sentry.startSpan (used elsewhere via withSpan), startInactiveSpan has no automatic error-status propagation. Failed LLM calls will appear as successful in Sentry's trace waterfall, undermining the tracing improvement this PR aims to deliver. The same gap exists for the success path when stopReason is "error".
Additional Locations (1)


Adds a `tracedStreamFn` wrapper that creates a `gen_ai.chat` Sentry span for each LLM call inside the pi-agent-core agent loop, capturing input/output messages, token usage, finish reasons, and response model. The wrapper is passed to the `Agent` via the `streamFn` option so spans nest naturally under the existing `gen_ai.invoke_agent` parent, following the `gen_ai.invoke_agent` → `gen_ai.chat` span hierarchy rule in the tracing spec.

Before:

After:

NOTE: All of the code was written by Claude based on the Sentry skills, the manual instrumentation docs, and docs it fetched on demand, plus a bit of steering from my side.