We've run into a few situations where it would help to have a clearer view of the number of tokens consumed by each request and response.
Let's augment the data we capture for tracing to include any extra information the LLM returns via `response_metadata`.
Our current understanding is that, for some models, the response includes metadata that breaks out the number of tokens used in the request and in the response.
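As a rough sketch of what capturing this could look like: the snippet below pulls token counts out of a `response_metadata` dict. The `token_usage` / `prompt_tokens` / `completion_tokens` / `total_tokens` keys follow the OpenAI-style layout that some providers report; that shape (and the `extract_token_usage` name) is an assumption for illustration, not something every model guarantees.

```python
# Assumption: metadata shaped like an OpenAI-style response, e.g.
# {"token_usage": {"prompt_tokens": ..., "completion_tokens": ..., "total_tokens": ...}}
# Other providers may use different keys, so every lookup is defensive.

def extract_token_usage(response_metadata: dict) -> dict:
    """Return request/response token counts if the provider reported them.

    Missing fields come back as None so tracing code can record
    "unknown" rather than crashing on providers without this metadata.
    """
    usage = response_metadata.get("token_usage") or {}
    return {
        "prompt_tokens": usage.get("prompt_tokens"),
        "completion_tokens": usage.get("completion_tokens"),
        "total_tokens": usage.get("total_tokens"),
    }

# Example with a hypothetical OpenAI-style metadata dict:
metadata = {
    "token_usage": {
        "prompt_tokens": 1200,
        "completion_tokens": 350,
        "total_tokens": 1550,
    },
    "model_name": "gpt-4",
}
print(extract_token_usage(metadata))
# A provider that reports nothing still yields a well-formed record:
print(extract_token_usage({}))
```

Recording the extracted dict alongside the existing trace data would let us see token consumption per request even when a provider omits some or all of the counts.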
@devjpt23 has begun to work on this issue. I wasn't yet able to formally assign it to him.
It looks like I can only assign issues to folks in the Konveyor org, so I formed a new 'Collaborators' team and invited @devjpt23 to it so he can be assigned future issues.