How to capture token usage in @track-decorated LLM calls where response.usage is missing? #1636
Can you share a couple of small code snippets? It will make it easier to recommend the right solution for you.
I'm currently using OPiK's @track decorator.

Problem: Missing Token Usage

When making LLM calls via LlamaIndex's LLM and agent interfaces (llm.acomplete() / FunctionCallingAgent.aquery()), the spans are created but no token usage is recorded on them.

1. LLM Call (Tracked)
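A minimal sketch of such a tracked call, assuming an agent class that wraps a LlamaIndex LLM client (the run method name and its body are illustrative, not taken from the actual codebase):

```python
from opik import track


class UserProfileAgent:
    def __init__(self, llm, system_prompt):
        self.llm = llm                      # LlamaIndex OpenAI or Azure client
        self.system_prompt = system_prompt

    @track
    async def run(self, prompt: str) -> str:
        # Opik records this call as a span with input/output, but the returned
        # string carries no usage information for the UI to display.
        response = await self.llm.acomplete(prompt)
        return response.text
```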
2. Agent Setup (LlamaIndex)

```python
from llama_index.core.agent import FunctionCallingAgent

def _create_agent(self, tools):
    return FunctionCallingAgent.from_tools(
        tools=tools,
        llm=self.llm,  # OpenAI or Azure client
        verbose=False,
        system_prompt=self.system_prompt,
    )
```

What's Happening

The spans for these calls do appear in the OPiK UI with input/output metadata, but the token usage field stays empty because the values returned by the tracked functions don't carry a response.usage payload.
What I'm Looking For

Is there a way to attach token usage to these @track spans when response.usage isn't present on the value returned by the tracked function?
I’d prefer not to duplicate or wrap every LLM call manually just to inject token usage.
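For reference, the manual workaround mentioned above might look roughly like the following sketch. It assumes that opik_context.update_current_span accepts an OpenAI-style usage dictionary and that the provider payload (including usage) is exposed on the LlamaIndex response's raw attribute; both points should be checked against the installed SDK versions.

```python
from opik import track, opik_context


@track
async def tracked_completion(llm, prompt: str) -> str:
    response = await llm.acomplete(prompt)

    # Assumption: response.raw holds the provider payload, either as a dict
    # or as a pydantic object with a `usage` attribute.
    raw = response.raw or {}
    usage = raw.get("usage") if isinstance(raw, dict) else getattr(raw, "usage", None)

    if usage is not None:
        usage_dict = usage if isinstance(usage, dict) else usage.model_dump()
        # Report token counts on the span created by @track for this call.
        opik_context.update_current_span(
            usage={
                "prompt_tokens": usage_dict.get("prompt_tokens"),
                "completion_tokens": usage_dict.get("completion_tokens"),
                "total_tokens": usage_dict.get("total_tokens"),
            }
        )
    return response.text
```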
I'll take a look, but it seems like the issue might be with LlamaIndex, right? If they don't surface the token usage, I don't think we can compute usage on our side.
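Depending on the Opik version installed, the LlamaIndex callback integration may also be worth trying: it hooks into LlamaIndex's CallbackManager and logs LLM events, including token counts when the provider reports them, without wrapping individual calls. A minimal sketch, assuming the integration ships as opik.integrations.llama_index:

```python
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager
from opik.integrations.llama_index import LlamaIndexCallbackHandler

# Register the Opik handler globally so that agent and LLM calls made
# through LlamaIndex are traced independently of the @track decorators.
opik_callback_handler = LlamaIndexCallbackHandler()
Settings.callback_manager = CallbackManager([opik_callback_handler])
```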
Hi OPiK team,
I’m using the @track decorator in a hybrid orchestration setup where each step is an agent function (e.g. UserProfileAgent, ReviewAgent, etc.), and inside these functions, I make calls to LLMs (e.g. via llm.acomplete() or FunctionCallingAgent.aquery() from LlamaIndex).
Everything is being tracked fine — spans are visible in the OPiK UI with input/output metadata — but I’m not seeing token usage for the LLM calls.
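A condensed sketch of one such step, with the class and method names (ReviewAgent, handle) used purely for illustration:

```python
from opik import track


class ReviewAgent:
    def __init__(self, agent):
        # `agent` is a LlamaIndex FunctionCallingAgent built via from_tools(...)
        self.agent = agent

    @track
    async def handle(self, request: str) -> str:
        # The span for this step shows input/output in the UI, but the
        # response object returned by aquery() does not expose a usage field,
        # so no token usage is recorded.
        result = await self.agent.aquery(request)
        return str(result)
```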
What I’ve Tried:
Questions: