
Is there any way to see if the Vercel AI SDK is using the cache? #40

Open · dschlabach opened this issue Dec 9, 2024 · 4 comments

@dschlabach (Contributor)

I've set up a local cache similar to the one in packages/examples, and I think it's working, but since it'll cost me if I'm wrong: is there a way to indicate in the eval tables whether an answer was cached?
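For reference, here's a minimal sketch of this kind of cache around `generateText`. The cache location and key scheme are illustrative assumptions, not the exact setup from packages/examples:

```ts
import { generateText, type LanguageModel } from "ai";
import { createHash } from "node:crypto";
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical cache location; the real example may store things elsewhere.
const CACHE_DIR = "./.cache/llm";

export async function cachedGenerateText(model: LanguageModel, prompt: string) {
  mkdirSync(CACHE_DIR, { recursive: true });

  // Key the cache on a hash of the prompt (a real key should also include
  // the model id and any generation settings).
  const key = createHash("sha256").update(prompt).digest("hex");
  const file = join(CACHE_DIR, `${key}.json`);

  if (existsSync(file)) {
    // Cache hit: the provider is never called, so this costs nothing.
    return { text: JSON.parse(readFileSync(file, "utf8")).text as string, cached: true };
  }

  const result = await generateText({ model, prompt });
  writeFileSync(file, JSON.stringify({ text: result.text }));
  return { text: result.text, cached: false };
}
```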

@mattpocock (Owner)

Neat idea, I'll add some metadata

@mattpocock (Owner)

A cheap way to do this (which is now done) is to show the durations more clearly in the traces view. That way, if a run took only ~10ms, it was almost certainly served from the local cache.

[screenshot: traces view showing per-trace durations]
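For anyone who wants to apply the same heuristic in code, here's a sketch. The 50ms threshold is an assumption; tune it for your provider and network:

```ts
// Assumed threshold: a real provider round-trip typically takes hundreds of
// milliseconds, so anything this fast almost certainly never left the machine.
const LIKELY_CACHED_MS = 50;

export async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  const result = await fn();
  const durationMs = performance.now() - start;
  const note = durationMs < LIKELY_CACHED_MS ? " (likely cached)" : "";
  console.log(`${label}: ${durationMs.toFixed(1)}ms${note}`);
  return result;
}
```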

@dschlabach (Contributor, Author)

Awesome, I'll add traces to my setup!
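For anyone doing the same, here's a sketch of capturing the data a trace needs (timings plus token usage) around an AI SDK call. The trace object's shape is an illustrative assumption, not evalite's actual schema; pass these fields to whatever trace-reporting hook your setup exposes:

```ts
import { generateText, type LanguageModel } from "ai";

export async function tracedGenerateText(model: LanguageModel, prompt: string) {
  const start = performance.now();
  const result = await generateText({ model, prompt });
  const end = performance.now();

  // Illustrative trace shape: everything here comes straight from the
  // generateText result and the timing measurements above.
  const trace = {
    input: prompt,
    output: result.text,
    durationMs: end - start,
    usage: result.usage, // { promptTokens, completionTokens, totalTokens }
  };

  return { result, trace };
}
```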

@mattpocock (Owner)

Also added a spot for tracking the exact prompt and completion token counts:

[screenshot: prompt and completion token counts shown in the eval UI]
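If you want to sanity-check what the UI reports, here's a sketch of tallying those counts yourself from what `generateText` already returns. The field names match the AI SDK's usage object as of this thread (`promptTokens`/`completionTokens`); newer SDK versions may name them differently:

```ts
import { generateText, type LanguageModel } from "ai";

// Running totals across an eval run, for comparing against the UI.
const totals = { promptTokens: 0, completionTokens: 0 };

export async function countedGenerateText(model: LanguageModel, prompt: string) {
  const result = await generateText({ model, prompt });
  totals.promptTokens += result.usage.promptTokens;
  totals.completionTokens += result.usage.completionTokens;
  return result;
}

export function usageSummary() {
  return { ...totals, totalTokens: totals.promptTokens + totals.completionTokens };
}
```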
