Simple parser (micro-)benchmarking #1994

Draft
wants to merge 1 commit into main
Conversation

evantypanski
Contributor

Draft PR while I work on it, but I thought it might be useful/interesting.

Initial results for the two cases I ran:

```
--------------------------------------------------------------------------------------
Benchmark                                            Time             CPU   Iterations
--------------------------------------------------------------------------------------
benchmark_parser/Benchmark::WithUnit/100         12910 ns        12849 ns        55722
benchmark_parser/Benchmark::WithUnit/1000       120478 ns       120480 ns         5742
benchmark_parser/Benchmark::WithUnit/10000     1140806 ns      1140793 ns          588
benchmark_parser/Benchmark::WithUnit/100000   11261903 ns     11261919 ns           62
benchmark_parser/Benchmark::Regex/100             5099 ns         5102 ns       136973
benchmark_parser/Benchmark::Regex/1000           39491 ns        39493 ns        17754
benchmark_parser/Benchmark::Regex/10000         379056 ns       378989 ns         1840
benchmark_parser/Benchmark::Regex/100000       3784368 ns      3784473 ns          184
```

I still want to get some numbers from previous releases (if possible) and maybe add a way to modify the C++ code generated from Spicy (perhaps just by adding the files to SOURCES or something).

This just piggybacks on the `--enable-benchmark` configure flag. It generates a binary, `spicy-rt-parsing-benchmark`.

@evantypanski
Contributor Author

I went back and ran these benchmarks down to release 1.7 (1.6 had some option changes that made the spicyc command fail, and I didn't investigate further than that). Here are the results for the largest input:

| Release | Benchmark::WithUnit | Benchmark::Regex |
|---------|--------------------:|-----------------:|
| main    | 11261903 ns         | 3784368 ns       |
| 1.12    | 13688066 ns         | 4265321 ns       |
| 1.11    | 11623078 ns         | 3776724 ns       |
| 1.10    | 12080949 ns         | 3939922 ns       |
| 1.9     | 15838365 ns         | 3774306 ns       |
| 1.8     | 15894474 ns         | 3764055 ns       |
| 1.7     | 18471877 ns         | 3867579 ns       |

Note that these numbers can vary quite a bit per run. Still, there seems to be a noticeable speedup for units/vectors over that span.

@rsmmr
Member

rsmmr commented Mar 18, 2025

> 1.12 13688066 ns 4265321 ns

> Note that these numbers can be quite different per-run.

Do you have a sense what's going on with 1.12? Could that just be a measurement glitch, or did we actually make it worse?

@evantypanski
Contributor Author

> Do you have a sense what's going on with 1.12?

I'm pretty sure that's just the relative flakiness of the measurements. I just ran the benchmark 20 times, and 4 of them were around the 1.12 values in the table, so I'd guess those runs were just unlucky. Fortunately, none reach the 1.7 release numbers for units, so the trend stands.

I just ran them to see a general trend, though, which I think they show pretty well. Not sure why they're so variable with nothing else going on on my laptop ¯\_(ツ)_/¯

@evantypanski force-pushed the topic/etyp/benchmarking-parsers branch from e99b49a to 5873eda on March 20, 2025 at 13:38
@evantypanski force-pushed the topic/etyp/benchmarking-parsers branch from 5873eda to 847864c on March 20, 2025 at 13:44