
Performance benchmarking using airspeed velocity #277

Open
galenseilis opened this issue Apr 9, 2025 · 3 comments

@galenseilis
Contributor

galenseilis commented Apr 9, 2025

It may be fair to say that if one is writing discrete event simulations in pure Python, performance is not their top priority. Python's dynamic typing and garbage collection preclude it from the top tier of performance among DES tools.

But I think performance still matters, and tracking and benchmarking can provide some observability into performance issues.

I've been looking into airspeed velocity (asv). It supports running benchmarks across commits, so you can see how performance has improved or regressed over time. Writing the benchmarks is like writing unit tests, except that they measure run time and memory usage.

I suggest trying this out with Ciw.
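
To make the suggestion concrete, here is a minimal sketch of what an asv benchmark file for Ciw might look like. The M/M/1 parameters and the time horizon are arbitrary illustrative choices; the `time_` and `peakmem_` prefixes are asv's naming convention for run-time and peak-memory benchmarks.

```python
# benchmarks/benchmarks.py -- discovered automatically by asv
import ciw


class MM1Suite:
    """Benchmark a simple M/M/1 queue with illustrative parameters."""

    def setup(self):
        # setup() runs before each benchmark and is excluded from the measurement
        ciw.seed(0)
        self.network = ciw.create_network(
            arrival_distributions=[ciw.dists.Exponential(rate=1.0)],
            service_distributions=[ciw.dists.Exponential(rate=2.0)],
            number_of_servers=[1],
        )

    def time_simulate(self):
        # Methods prefixed with time_ are timed by asv
        sim = ciw.Simulation(self.network)
        sim.simulate_until_max_time(500)

    def peakmem_simulate(self):
        # Methods prefixed with peakmem_ have their peak memory usage recorded
        sim = ciw.Simulation(self.network)
        sim.simulate_until_max_time(500)
```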

@drvinceknight
Contributor

OOOO I hadn't heard of asv before. I'm using pytest-benchmark on another project.

I have a question about asv:

Looking briefly at the documentation for asv, it looks like writing the benchmarks is a bit more work than for pytest-benchmark (which integrates nicely with pytest). But you mention "running benchmarks across commits", which is something pytest-benchmark does not support. Could you point me at the docs for that specific feature (I had a lazy look but couldn't find it)? And do you happen to have any more insight into comparing pytest-benchmark with asv?

While I hope my question is helpful for ciw, I'm also somewhat shamefully asking for my other projects :)

@galenseilis
Contributor Author

Nice comparison to pytest-benchmark. The main difference is that asv is built around tracking a Python project's performance benchmarks across git commits. If someone doesn't want that, asv is not a good choice, since that is primarily what it does.

Because the package is targeted at exactly this kind of performance tracking, you don't need to do anything beyond the initial setup to get it. asv is configurable, but as far as I know none of the configuration turns the tracking off; it is the core feature of the package, which is why there isn't a specific place in the docs for it.

In the examples I have seen, people include asv as part of their continuous integration procedures.
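
For the cross-commit part specifically, there isn't a separate feature to enable; it is driven from asv's command line. A sketch of the workflow (the revision names here are illustrative):

```console
# Run the benchmark suite over a range of commits (git revision syntax)
asv run v0.1.0..main

# Compare the recorded results of two revisions
asv compare v0.1.0 main

# Build the HTML dashboard of results over time and preview it locally
asv publish
asv preview
```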

@drvinceknight
Contributor

Cool. This sounds nice.
