
Conversation

@crowecawcaw (Contributor)

What was the problem/requirement? (What/Why)

There are gaps in the parser where a malformed template can result in an unexpected error.

What was the solution? (How)

Add a fuzzer.
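The core idea can be sketched as follows: mutate a known-good seed template, feed each mutant to the decoder, and treat any exception outside the documented set as a parser gap. This is a minimal illustration only; `decode_template`, the mutation logic, and the expected-exception set are stand-ins, not the PR script's actual interface.

```python
import json
import random
import string

# Illustrative stand-in for the library's template decoder; the PR's
# script calls the real decode function instead.
def decode_template(text: str) -> dict:
    data = json.loads(text)
    if not isinstance(data, dict):
        raise ValueError("template must be a JSON object")
    return data

# Exception types the decoder is documented to raise for malformed input.
EXPECTED_EXCEPTIONS = (ValueError,)

def mutate(seed: str, rng: random.Random) -> str:
    """Flip, insert, or delete one random character in the seed."""
    chars = list(seed)
    i = rng.randrange(len(chars))
    op = rng.choice(("flip", "insert", "delete"))
    if op == "flip":
        chars[i] = rng.choice(string.printable)
    elif op == "insert":
        chars.insert(i, rng.choice(string.printable))
    else:
        del chars[i]
    return "".join(chars)

def fuzz(seed: str, iterations: int, rng_seed: int = 0) -> list[str]:
    """Return mutated inputs that raised an *unexpected* exception type."""
    rng = random.Random(rng_seed)
    failures = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            decode_template(candidate)
        except EXPECTED_EXCEPTIONS:
            pass  # cleanly rejected malformed input: the happy path
        except Exception:
            failures.append(candidate)  # a gap like the one behind #252
    return failures

seed = json.dumps({"specificationVersion": "jobtemplate-2023-09", "name": "demo"})
print(len(fuzz(seed, 500)))  # 0 when every bad input is rejected cleanly
```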

What is the impact of this change?

It discovered the issue we were seeing in our code that depends on this parser: #252

How was this change tested?

Ran it against this codebase and discovered a bug.

Was this change documented?

Yes, the script is self-documenting.

Is this a breaking change?

No

Does this change impact security?

Yes, this is an improvement.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Signed-off-by: Stephen Crowe <[email protected]>
@crowecawcaw crowecawcaw requested a review from a team as a code owner December 15, 2025 17:10
@crowecawcaw crowecawcaw enabled auto-merge (squash) December 15, 2025 18:24
@prestomation (Contributor)

Should this be part of the pytest suite?

@crowecawcaw (Contributor, Author)

> Should this be part of the pytest suite?

I wasn't sure. A fuzzer might run for a long period of time if you do a lot of iterations. I'll enable it as a pytest as well with a smaller number of iterations that complete in a reasonable period of time.
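The pytest wiring described here could look roughly like the sketch below, with a bounded iteration count and a fixed RNG seed so any CI failure is reproducible. The decoder and mutation function are illustrative stand-ins, not the PR script's real interface.

```python
import json
import random

ITERATIONS = 500  # bounded so the pytest suite stays fast; the
                  # standalone script can run far more iterations

def decode_template(text: str) -> None:
    """Stand-in decoder; the real test calls the library's decoder."""
    json.loads(text)

def mutate(seed: str, rng: random.Random) -> str:
    """Flip one character of the seed at a random position."""
    chars = list(seed)
    chars[rng.randrange(len(chars))] = chr(rng.randrange(32, 127))
    return "".join(chars)

def test_fuzz_template_decoder():
    rng = random.Random(1234)  # fixed seed keeps failures reproducible
    seed = json.dumps({"specificationVersion": "jobtemplate-2023-09"})
    for _ in range(ITERATIONS):
        try:
            decode_template(mutate(seed, rng))
        except ValueError:
            pass  # expected rejection of malformed input
        # any other exception type propagates and fails the test
```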

@AWS-Samuel (Contributor) left a comment


High-level comment: there are some hard-coded 2023-09 strings we might want to parameterize. I don't think there will be a new spec version in the near future, so we can probably put this off.

Comment on lines +233 to +234:

    except EXPECTED_EXCEPTIONS:
        successes += 1
Contributor

There is one concern here. We expect some of these inputs to raise exceptions (e.g. DecodeValidationError), but others not to raise exceptions at all. However, we have no logic to know which templates we expect to raise such errors.

So there is a chance that we are ignoring errors in our job/environment template decoder, or allowing bad inputs through.

Is that something we are okay with? Are the goals of the test to only identify unhandled exceptions hidden in the decoder?
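If correctness checking is added later, one shape it could take is pairing each input with an expectation, so that a silently accepted bad template and a silently rejected good one both count as failures. The decoder and exception class below are stand-ins for the library's real ones, and the cases are hypothetical examples.

```python
import json

class DecodeValidationError(ValueError):
    """Stand-in for the library's validation error type."""

def decode_template(text: str) -> dict:
    """Stand-in decoder that wraps parse errors in the expected type."""
    try:
        data = json.loads(text)
    except ValueError as exc:
        raise DecodeValidationError(str(exc)) from exc
    if not isinstance(data, dict):
        raise DecodeValidationError("template must be an object")
    return data

CASES = [
    # (template text, should_decode)
    ('{"specificationVersion": "jobtemplate-2023-09"}', True),
    ('{"specificationVersion": ', False),  # truncated JSON
    ('[1, 2, 3]', False),                  # wrong top-level type
]

def check(template: str, should_decode: bool) -> bool:
    """True when the decoder's accept/reject decision matches expectation."""
    try:
        decode_template(template)
    except DecodeValidationError:
        return not should_decode  # rejection is correct only for bad input
    return should_decode          # acceptance is correct only for good input

print(all(check(t, ok) for t, ok in CASES))  # prints True
```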

@crowecawcaw (Contributor, Author)

Good point! My original goal was just to figure out why the library sometimes raises exceptions that aren't DecodeValidationError or NotImplementedError, because that broke some error-handling code built on top of this library. Expanding this to also test for correctness makes sense. What do you think about leaving that for the future if we want to add it?

Contributor

I'm fine with that since the repository already has existing correctness tests.

@crowecawcaw crowecawcaw merged commit 1c0263a into OpenJobDescription:mainline Dec 19, 2025
25 checks passed