
Crate workshop: Running Evaluations #44

Open
deanwampler opened this issue Mar 4, 2025 · 0 comments

As part of the TSEI promotion plan, this workshop expands on the brief coverage of how to run benchmarks in #43.

Using a canned set of benchmarks, the workshop will focus on the unified reference stack for running evaluations, both offline during development and online during inference. What differs between these two deployment contexts, for example the performance considerations and the kinds of evaluations that make sense in each?
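To make the offline/online distinction concrete, here is a minimal, self-contained Python sketch. Everything in it is a hypothetical illustration (the `CANNED_BENCHMARK` examples, `run_benchmark_suite`, `check_response`, and the keyword blocklist), not part of the TSEI reference stack:

```python
import time
from typing import Callable

# --- Offline: run a canned benchmark suite during development. ---
# Thoroughness matters more than latency, so every example is scored.

CANNED_BENCHMARK = [  # hypothetical stand-in for a real benchmark crate
    {"prompt": "2 + 2 =", "expected": "4"},
    {"prompt": "Capital of France?", "expected": "Paris"},
]

def run_benchmark_suite(model: Callable[[str], str]) -> float:
    """Score the model on every example and return accuracy."""
    correct = sum(
        1 for ex in CANNED_BENCHMARK if ex["expected"] in model(ex["prompt"])
    )
    return correct / len(CANNED_BENCHMARK)

# --- Online: evaluate each response during inference. ---
# Latency matters, so the check must be cheap, e.g. a keyword screen
# rather than a full benchmark run or an LLM-as-judge call.

SAFETY_KEYWORDS = {"attack", "exploit"}  # hypothetical blocklist

def check_response(response: str, budget_ms: float = 5.0) -> bool:
    """Cheap per-request evaluation that must stay within a latency budget."""
    start = time.perf_counter()
    ok = not any(word in response.lower() for word in SAFETY_KEYWORDS)
    assert (time.perf_counter() - start) * 1000 < budget_ms
    return ok

if __name__ == "__main__":
    toy_model = lambda prompt: "4" if "2 + 2" in prompt else "Paris"
    print("offline accuracy:", run_benchmark_suite(toy_model))
    print("online check passed:", check_response("The capital is Paris."))
```

The design choice the sketch highlights: offline evaluation optimizes for coverage and can afford slow, comprehensive scoring, while online evaluation runs on the request path and must trade depth for latency.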
