Add Roslyn analyzers to detect incorrect usage of BenchmarkDotNet #2837
Conversation
@dotnet-policy-service agree
I'm not sure whether the analyzers should be automatically enabled with the base BenchmarkDotNet package or be opt-in via their own NuGet package. What do you think?
They should be enabled by default.
So maybe the VSIX package project can be removed then, as the analyzer can be referenced through an analyzer project reference.
I actually think the analyzer should be included directly in the annotations package. Otherwise, it was found that a separate analyzer package pulls in too many unnecessary dependencies. It's a bit complicated to set up the build to do it, though, so I can do it separately after this is merged if you want. [Edit] Or I can push to your branch after your changes are complete.
I'm getting a build error.
You need to import common.props in the analyzer project, too.
Solved it. I also needed to add the public key of the assembly to the InternalsVisibleTo attribute.
Do I just reference the analyzer project from the annotations project, or do I need to do something special for the analyzers to activate for the user? Mind that we of course don't want the analyzers to activate for the annotations project itself, but only transitively for the user.
Yes, that's sufficient for now.
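For reference, exposing the assembly's internals to a strong-named test or analyzer assembly is typically a one-liner along these lines; the assembly name and public key below are placeholders for illustration, not the actual BenchmarkDotNet values.

```csharp
// Properties/AssemblyInfo.cs (illustrative sketch; replace the placeholder
// public key with the full public key of the strong-named test assembly)
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("BenchmarkDotNet.Analyzers.Tests, PublicKey=00240000048000009400000006020000")]
```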
Also, you should move the analyzers test project under the tests/ directory (and move the analyzers project up one level).
Is the baseline uniqueness per class or per category?
It's per category.
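To make that concrete, here is a hedged sketch (the class and method names are invented, not code from this PR) of a class where two baselines are valid because each belongs to a different category:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;

// With benchmarks grouped by category, each category may declare its own
// baseline, so two [Benchmark(Baseline = true)] methods are valid here.
[GroupBenchmarksBy(BenchmarkLogicalGroupRule.ByCategory)]
public class CategoryBaselines
{
    [BenchmarkCategory("Parsing"), Benchmark(Baseline = true)]
    public int ParseBaseline() => int.Parse("42");

    [BenchmarkCategory("Parsing"), Benchmark]
    public bool ParseCandidate() => int.TryParse("42", out _);

    [BenchmarkCategory("Formatting"), Benchmark(Baseline = true)]
    public string FormatBaseline() => 42.ToString();

    [BenchmarkCategory("Formatting"), Benchmark]
    public string FormatCandidate() => string.Format("{0}", 42);
}
```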
Please also test these cases:

```csharp
private const int x = 100;

[Params(x)]
public int num;

[Arguments(x)]
public void Benchmark(int i) { }
```

```csharp
[Params(DifferentType.SomeConst)]
public int num;

[Arguments(DifferentType.SomeConst)]
public void Benchmark(int i) { }
```

I had that in the pipeline too.
… must be non-abstract and generic * Benchmark classes are allowed to be generic if they are either abstract or annotated with at least one [GenericTypeArguments] attribute * Assume that a class can be annotated with more than one [GenericTypeArguments] attribute
* Add a rule that the benchmark class referenced in the type argument of the BenchmarkRunner.Run method cannot be abstract
…hrough its ancestors
…arameter created by using a typeof expression
…ed with the Benchmark attribute when analyzing GenericTypeArguments attribute rules
…ot trigger mismatching type diagnostics * Test all valid attribute value types when performing type matching
…ArgumentsAttribute]" to Run analyzer and remove abstract modifier requirement
… said array for [Arguments] attribute values
… for [Params] and [Arguments] attribute values
…operty on BenchmarkAttribute * Split logic for baseline method analyzer into two rules, also verifying whether they're unique per category
… it works correctly with invalid string values
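To illustrate the generic benchmark class rule mentioned in the commits above, here is a hedged sketch (class and member names are made up) using the real [GenericTypeArguments] attribute:

```csharp
using BenchmarkDotNet.Attributes;

// A generic benchmark class is accepted when it is annotated with at least
// one [GenericTypeArguments] attribute; each attribute tells BenchmarkDotNet
// which concrete type arguments to close the class with.
[GenericTypeArguments(typeof(int))]
[GenericTypeArguments(typeof(string))]
public class GenericBenchmarks<T>
{
    [Benchmark]
    public T[] Allocate() => new T[64];
}
```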
I'm not sure what's causing the build errors, to be honest, as I haven't touched the original projects.
You can check the logs: https://github.com/dotnet/BenchmarkDotNet/actions/runs/18837492926
I see a lot of build errors.
All workflows are green except the test-macos (x64) job. Is it an environment issue or something on my part?
Flaky tests, unrelated to these changes. Is there any more work you want to do here, or is it good to go?
I feel very confident in releasing this first iteration of the analyzers. They cover the most common scenarios that were requested in the discussion, and I or other contributors can always build further on top of this. Thanks for your patience and guidance during the review process! As you mentioned, among other things what remains is to adjust the pipeline. I noticed that the Analyzers project is built in Debug mode.
Alright, I'll work on that and push to your branch when I get some time.
@silkfire thank you very much for working on this!
The categories were introduced in the early days of BenchmarkDotNet (2016-2017). No prior design work was done: it was an experimental feature that evolved gradually based on user feedback. If some behavior feels unnatural or some corner cases don't make much sense, feel free to suggest changes.
Are we okay with the naming of the project, as it might be conflated with the existing Analysers concept?
It's fine. It won't be visible to users anyway. |
Fine for me too. I have thoughts on reworking the current Analysers and maybe even dropping this concept from the public API. Currently, they are used mostly for statistical post-checks (moving to perfolizer and pragmastat) and error aggregation (moving to core logic). With the new result data format, we won't need an extension point here since users will have other ways to implement their post-checks. It's a huge piece of work with no ETA yet, but long-term we can resolve this name clash. I don't see any issues with using this name for the new package now. On the compilation level, it won't lead to any conflicts since the new package uses American spelling (Analyzers) while the existing namespace uses British spelling (Analysers).
This PR introduces an extensive set of analyzers that warn the user about incorrect usage of BenchmarkDotNet. This is something that has been asked for since 2017 but has yet to be implemented. BDN has a set of validators that use reflection to detect errors, but they are only triggered after the benchmark code has been compiled and is about to run.
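As a hypothetical illustration (not taken from this PR), the type mismatch below compiles fine and is only reported by a validator once the benchmarks are about to run, whereas an analyzer can flag it in the IDE as the code is being written:

```csharp
using BenchmarkDotNet.Attributes;

public class StringBenchmarks
{
    // The [Params] values are strings but the field is an int, so no valid
    // parameter value can ever be assigned to it.
    [Params("short", "long")]
    public int Length;

    [Benchmark]
    public string Create() => new string('x', Length);
}
```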
I had the idea to implement this back in 2022, but the testing framework at the time wasn't trivial to use, so I gave up in the end. Today, Roslyn analyzer testing is completely test framework-agnostic, which makes things considerably easier. It's also trivial to add multiple source files, references and framework assemblies in order to test your analyzer precisely the way you want.
All unit tests are implemented using xUnit v2.
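For readers unfamiliar with the framework-agnostic packages, a test built on Microsoft.CodeAnalysis.Testing usually looks roughly like the sketch below; the diagnostic ID, the expected location and the analyzer's namespace import are assumptions for illustration, not the PR's actual values:

```csharp
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Testing;
using Xunit;
// using <namespace of ParamsAttributeAnalyzer>; // assumed

public class ParamsAttributeAnalyzerTests
{
    [Fact]
    public async Task Reports_value_type_mismatch()
    {
        var test = new CSharpAnalyzerTest<ParamsAttributeAnalyzer, DefaultVerifier>
        {
            // The {|#0:...|} markup marks where the diagnostic is expected.
            TestCode = """
                using BenchmarkDotNet.Attributes;

                public class Benchmarks
                {
                    [Params("oops")]
                    public int {|#0:Value|};

                    [Benchmark]
                    public int Run() => Value;
                }
                """,
        };

        // Make the BenchmarkDotNet attributes resolvable in the test compilation.
        test.TestState.AdditionalReferences.Add(typeof(BenchmarkDotNet.Attributes.ParamsAttribute).Assembly);

        // "BDN0001" is an invented diagnostic ID used purely for illustration.
        test.ExpectedDiagnostics.Add(new DiagnosticResult("BDN0001", DiagnosticSeverity.Warning).WithLocation(0));

        await test.RunAsync();
    }
}
```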
With these analyzers, developers can detect errors early and fix them immediately. The diagnostic descriptions are clear and succinct, guiding the user and explaining the reasoning behind each rule.
Here's a list of the currently implemented analyzers. There are still some remaining, but I believe this is a good start and covers the most common usage errors. The rest is up for grabs and can be added along the way.

- BenchmarkRunner.Run<BenchmarkClass>() is called and the benchmark class BenchmarkClass (or any of its inherited classes) has no public methods marked with the [Benchmark] attribute
- The [BenchmarkCategory] attribute results in an unintended null array (see the sketch below)

TODO

- [ArgumentsSource] points to a valid method
- [ParamsSource] points to a valid method

See #2666 for discussion as well as #389.
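For the null-array rule above, here is one way the misuse can arise (a hedged sketch; class and method names are made up): passing a bare null to the params string[] constructor binds null to the array itself.

```csharp
using BenchmarkDotNet.Attributes;

public class ParsingBenchmarks
{
    // A bare null matches the params string[] parameter in its normal form,
    // so the categories array itself becomes null rather than containing a
    // single null entry - almost certainly not what the author intended.
    [BenchmarkCategory(null)]
    [Benchmark]
    public int Parse() => int.Parse("42");
}
```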