
Create a benchmark suite for uncovering runtime regressions #31265

Closed
@nikomatsakis

Description


We need a benchmark suite targeting the runtime of generated code. I've started gathering together code samples into a repository. Here is a list of the projects I plan to extract:

In addition to curating the benchmarks themselves, we need a good way to run them. There are some tasks associated with that:

  • Put the projects in a repo with cargo setup such that cargo bench will run the relevant tests (this part I expect to get started on --nmatsakis)
  • Write a script that will execute cargo bench and extract the results into one data set
  • Provide some way to save that data set to disk and to compare against other data sets (e.g., runs of the master branch)
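The second and third bullets could be sketched as follows. This is a minimal Python sketch, assuming libtest's default `cargo bench` output format (`test <name> ... bench: <n> ns/iter (+/- <dev>)`); the function names and the 5% regression threshold are hypothetical choices, not part of any existing tool:

```python
import re

# Matches libtest's bench output lines, e.g.
# "test bench_sum ... bench:      1,234 ns/iter (+/- 56)"
BENCH_RE = re.compile(
    r"^test (\S+)\s+\.\.\. bench:\s+([\d,]+) ns/iter \(\+/- ([\d,]+)\)"
)

def parse_bench_output(text):
    """Extract {benchmark name: ns/iter} from `cargo bench` stdout."""
    results = {}
    for line in text.splitlines():
        m = BENCH_RE.match(line.strip())
        if m:
            # Strip thousands separators before converting.
            results[m.group(1)] = int(m.group(2).replace(",", ""))
    return results

def compare(current, baseline, threshold=0.05):
    """Return benchmarks that got slower than `baseline` by more
    than `threshold` (a fraction), as {name: (old_ns, new_ns)}."""
    regressions = {}
    for name, ns in current.items():
        base = baseline.get(name)
        if base is not None and ns > base * (1 + threshold):
            regressions[name] = (base, ns)
    return regressions
```

A wrapper could then run `cargo bench`, feed its stdout through `parse_bench_output`, dump the resulting dict to disk as JSON, and diff it against a saved run of the master branch with `compare`.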

Eventually, I would want to integrate this into our regular benchmarking computer so that the results can be put up on a website, but for now it'd be nice if you could at least run the suite locally.

Metadata


Labels

  • C-tracking-issue: Category: An issue tracking the progress of sth. like the implementation of an RFC
  • E-mentor: Call for participation: This issue has a mentor. Use #t-compiler/help on Zulip for discussion.
  • P-medium: Medium priority
  • T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.
