We need a benchmark suite targeting the runtime of generated code. I've started gathering code samples into a repository. Here is a list of the projects I plan to extract:
- WAD processing from the rust-doom library -- thanks @cristicbz!
- Something from cargo (@alexcrichton is working on it)
- LALRPOP LR(1) item generation for some simple grammar (hmm)
- Something from regex (see this comment)
- Some HashMap benchmark?
  - we'd want to freeze a particular impl of `HashMap`, not just take from the libs
- @eddyb's implementation of inflate applied to some input
- https://github.com/jorendorff/rust-raytrace or https://github.com/gyng/rust-raytracer
In addition to curating the benchmarks themselves, we need a good way to run them. There are some tasks associated with that:
- Put the projects in a repo with cargo setup such that `cargo bench` will run the relevant tests (this part I expect to get started on --nmatsakis); a minimal sketch of such a harness appears after this list
- Write a script that will execute `cargo bench` and extract the results into one data set (see the parsing sketch below)
- Provide some way to save that data set to disk and to compare against other data sets (e.g., runs of the `master` branch); one possible format is sketched below
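To make `cargo bench` pick the projects up, each one could expose a nightly `#[bench]` function under `benches/`. Here's a minimal sketch for the inflate benchmark; the `inflate` crate name, the `inflate_bytes` entry point, and the fixture path are all assumptions about how the extracted code would be packaged:

```rust
// benches/inflate.rs -- requires nightly for the libtest bench harness
#![feature(test)]
extern crate test;
extern crate inflate; // assumed: the extracted code is exposed as a crate

use test::Bencher;

#[bench]
fn bench_inflate(b: &mut Bencher) {
    // hypothetical frozen input; each benchmark would ship its own fixture
    let compressed = include_bytes!("../data/sample.deflate");
    b.iter(|| inflate::inflate_bytes(compressed).unwrap());
}
```

Freezing the input data alongside the code matters here for the same reason as freezing the `HashMap` impl: we want to measure the compiler, not drift in the benchmark itself.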
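For the extraction script, one option is a small Rust program that shells out to `cargo bench` and scrapes its output; the stock libtest harness prints one line per benchmark in the form `test name ... bench: 1,234 ns/iter (+/- 56)`. A sketch under that assumption:

```rust
use std::collections::BTreeMap;
use std::process::Command;

/// Run `cargo bench` and collect each benchmark's ns/iter into one data set.
fn run_benches() -> BTreeMap<String, u64> {
    let output = Command::new("cargo")
        .arg("bench")
        .output()
        .expect("failed to run cargo bench");
    let stdout = String::from_utf8_lossy(&output.stdout);

    let mut results = BTreeMap::new();
    for line in stdout.lines() {
        // libtest prints: "test bench_inflate ... bench: 1,234 ns/iter (+/- 56)"
        if line.starts_with("test ") {
            if let Some(pos) = line.find("... bench:") {
                let name = line["test ".len()..pos].trim().to_string();
                let ns_str = line[pos + "... bench:".len()..]
                    .split_whitespace()
                    .next()
                    .unwrap()
                    .replace(",", "");
                results.insert(name, ns_str.parse().unwrap());
            }
        }
    }
    results
}

fn main() {
    for (name, ns) in run_benches() {
        println!("{}: {} ns/iter", name, ns);
    }
}
```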
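Saving and comparing data sets could then be a line-oriented text file per run; this extends the same hypothetical script (the format and the `master.bench`/`current.bench` file names are illustrative, not decided):

```rust
use std::collections::BTreeMap;
use std::fs::File;
use std::io::{BufRead, BufReader, Write};

/// Save one data set as "name<TAB>ns_per_iter" lines.
fn save(path: &str, data: &BTreeMap<String, u64>) -> std::io::Result<()> {
    let mut f = File::create(path)?;
    for (name, ns) in data {
        writeln!(f, "{}\t{}", name, ns)?;
    }
    Ok(())
}

fn load(path: &str) -> std::io::Result<BTreeMap<String, u64>> {
    let f = BufReader::new(File::open(path)?);
    let mut data = BTreeMap::new();
    for line in f.lines() {
        let line = line?;
        let mut parts = line.split('\t');
        if let (Some(name), Some(ns)) = (parts.next(), parts.next()) {
            data.insert(name.to_string(), ns.parse().unwrap());
        }
    }
    Ok(data)
}

/// Print the relative change of `new` against `base` for each benchmark.
fn compare(base: &BTreeMap<String, u64>, new: &BTreeMap<String, u64>) {
    for (name, &old_ns) in base {
        if let Some(&new_ns) = new.get(name) {
            let pct = (new_ns as f64 - old_ns as f64) / old_ns as f64 * 100.0;
            println!("{}: {} -> {} ns/iter ({:+.1}%)", name, old_ns, new_ns, pct);
        }
    }
}

fn main() -> std::io::Result<()> {
    // hypothetical workflow: a baseline run was written earlier with `save`
    let base = load("master.bench")?;
    let current = load("current.bench")?;
    compare(&base, &current);
    Ok(())
}
```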
Eventually, I would want to integrate this into our regular benchmarking computer so that the results can be put up on a website, but for now it'd be nice if one could at least run the suite locally.