test: add fuzzer #253
Conversation
Signed-off-by: Stephen Crowe <[email protected]>
Should this be part of the pytest suite?
I wasn't sure. A fuzzer might run for a long period of time if you do a lot of iterations. I'll enable it as a pytest as well, with a smaller number of iterations that complete in a reasonable period of time.
Signed-off-by: Stephen Crowe <[email protected]>
Signed-off-by: Stephen Crowe <[email protected]>
AWS-Samuel
left a comment
High-level comment: there are some hard-coded 2023-09 strings we might want to parametrize. I don't think there'll be a new spec version in the near future, so we can probably put this off.
```python
except EXPECTED_EXCEPTIONS:
    successes += 1
```
There is one concern here. We expect some of these inputs to raise exceptions (e.g. DecodeValidationError) but others not to raise any. However, we have no logic to know which templates we expect to raise such errors.
So there is a chance that we are ignoring errors in our job/environment template decoder or allowing bad inputs through.
Is that something we are okay with? Are the goals of the test to only identify unhandled exceptions hidden in the decoder?
Good point! My original goal was just to figure out why the library sometimes raises exceptions that aren't DecodeValidationError or NotImplementedError, because it broke some error-handling code based on this library. Expanding this to test for correctness also makes sense. What do you think about leaving that for the future, if we want to add it?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I'm fine with that since the repository already has existing correctness tests.
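The success-counting pattern under discussion can be sketched as follows. The decoder and exception class here are hypothetical stand-ins for the library's real API; only the `EXPECTED_EXCEPTIONS` / `successes` structure mirrors the quoted diff:

```python
import random

# Hypothetical stand-in for the library's validation error type.
class DecodeValidationError(Exception):
    pass

# Errors the fuzzer treats as an acceptable rejection of bad input.
EXPECTED_EXCEPTIONS = (DecodeValidationError, NotImplementedError)

def decode_template(data):
    # Stand-in decoder: rejects anything that is not a mapping.
    if not isinstance(data, dict):
        raise DecodeValidationError("template must be a mapping")
    return data

successes = 0
failures = []
rng = random.Random(0)
for _ in range(1000):
    candidate = rng.choice([None, 3.14, "text", [], {"name": "demo"}])
    try:
        decode_template(candidate)
        successes += 1          # input accepted
    except EXPECTED_EXCEPTIONS:
        successes += 1          # input rejected with an expected error
    except Exception as exc:    # noqa: BLE001
        failures.append(exc)    # the bug class the fuzzer hunts for
```

As the review thread notes, both "accepted" and "rejected as expected" count as a success here, so a decoder that wrongly rejects a valid template would still pass; the test only surfaces unexpected exception types.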



What was the problem/requirement? (What/Why)
There are gaps in the parser where a malformed template can result in an unexpected error.
What was the solution? (How)
Add a fuzzer.
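As an illustrative sketch of the approach (the real script's details may differ), the fuzzer can start from a valid template and apply random structural mutations before handing each variant to the decoder. `BASE_TEMPLATE` and the mutation set below are assumptions for illustration, not the library's actual schema:

```python
import copy
import random

# Illustrative base document; the real templates use a hard-coded
# 2023-09 specification version string, as noted in the review above.
BASE_TEMPLATE = {
    "specificationVersion": "jobtemplate-2023-09",
    "name": "demo-job",
    "steps": [{"name": "step-1", "script": {"actions": {}}}],
}

def mutate(template, rng):
    """Return a randomly corrupted deep copy of the template."""
    result = copy.deepcopy(template)
    key = rng.choice(list(result))
    action = rng.choice(["drop", "retype", "nest"])
    if action == "drop":
        del result[key]                                # remove a field
    elif action == "retype":
        result[key] = rng.choice([None, 42, [], {}])   # wrong type
    else:
        result[key] = {"unexpected": result[key]}      # extra nesting
    return result

rng = random.Random(0)
samples = [mutate(BASE_TEMPLATE, rng) for _ in range(5)]
```

Each sample can then be fed through the decoder inside a try/except that counts expected validation errors and records anything else as a finding.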
What is the impact of this change?
It discovered the issue we were seeing in our code that depends on this parser: #252
How was this change tested?
Ran it against this codebase and discovered a bug.
Was this change documented?
Yes, the script is self-documenting.
Is this a breaking change?
No
Does this change impact security?
Yes, improvement
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.