feat(core): Accumulate tokens for gen_ai.invoke_agent spans from child LLM calls
#17281 · +54 −0
Problem

Currently, gen_ai.invoke_agent spans (representing operations like generateText()) contain inaccurate token usage information. Users can only see token data on the individual gen_ai.generate_text child spans; the tokens are not accumulated across nested spans, making it difficult to track total token consumption for a complete AI operation.

Solution

Implement token accumulation for gen_ai.invoke_agent spans by iterating over their child LLM spans and aggregating their token usage onto the parent span.
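The accumulation described above can be sketched roughly as follows. This is a simplified illustration, not the PR's actual implementation: the SpanData shape and the accumulateTokens helper are hypothetical, while the gen_ai.usage.* attribute names follow the OpenTelemetry GenAI semantic conventions.

```typescript
// Hypothetical, simplified span shape for illustration only.
interface SpanData {
  op: string;
  attributes: Record<string, number | undefined>;
  children: SpanData[];
}

// Walk all descendant spans of an invoke_agent span, sum their
// token usage, and write the totals onto the agent span itself.
function accumulateTokens(agentSpan: SpanData): void {
  let input = 0;
  let output = 0;

  const visit = (span: SpanData): void => {
    input += span.attributes['gen_ai.usage.input_tokens'] ?? 0;
    output += span.attributes['gen_ai.usage.output_tokens'] ?? 0;
    span.children.forEach(visit); // recurse into nested spans
  };
  agentSpan.children.forEach(visit);

  agentSpan.attributes['gen_ai.usage.input_tokens'] = input;
  agentSpan.attributes['gen_ai.usage.output_tokens'] = output;
  agentSpan.attributes['gen_ai.usage.total_tokens'] = input + output;
}

// Example: an agent span with two LLM child calls.
const agent: SpanData = {
  op: 'gen_ai.invoke_agent',
  attributes: {},
  children: [
    {
      op: 'gen_ai.generate_text',
      attributes: { 'gen_ai.usage.input_tokens': 10, 'gen_ai.usage.output_tokens': 5 },
      children: [],
    },
    {
      op: 'gen_ai.generate_text',
      attributes: { 'gen_ai.usage.input_tokens': 7, 'gen_ai.usage.output_tokens': 3 },
      children: [],
    },
  ],
};

accumulateTokens(agent);
// agent.attributes now carries input 17, output 8, total 25.
```

The recursive walk matters because LLM calls may be nested more than one level below the agent span (e.g. inside tool-call spans); summing only direct children would undercount.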