Integrate pytest-subtests #13738
base: main
Conversation
I agree that this feature should be in pytest core because it's a feature in unittest, and pytest should aim to be a drop-in replacement for unittest (plus the original issue has 40 👍 and no 👎 at the time of writing, which is overwhelming popular support in my book).
I recall we need to fix marking the owning test case as failed if one subtest fails.
But yeah, I want to see this in.
Sounds reasonable.
Ready for an initial review, folks.
Nice to see this happening.
I ran out of time for the review today, so I didn't really get to the implementation parts, but I already have some comments, so I'm submitting a partial review.
While :ref:`traditional pytest parametrization <parametrize>` and ``subtests`` are similar, they have important differences and use cases.
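To make that distinction concrete, here is a minimal side-by-side sketch, assuming the `subtests` fixture API from the pytest-subtests plugin this PR integrates (`subtests.test()` used as a context manager); the test names and values are made up for illustration.

```python
import pytest


# Parametrization: the cases are fixed at collection time and each value
# becomes its own test item.
@pytest.mark.parametrize("n", [1, 2, 3])
def test_square_parametrized(n):
    assert n * n >= n


# Subtests: a single test item whose sections are entered at runtime and
# reported individually.
def test_square_subtests(subtests):
    for n in [1, 2, 3]:
        with subtests.test(msg="square", n=n):
            assert n * n >= n
```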
Continuing from the comment above, I can see two ways we can approach this:
1 - Subtests are for "runtime parametrization"
Per comment above, subtests are useful when you have some data dynamically fetched in the test and want individualized reporting for each data value.
The idea here is that parametrization should be the go-to tool, but we offer this subtest tool for this particular scenario.
2 - Subtests are for "sub-testing"
By which I mean, subtests are for when you have one conceptual test, i.e. you consider it a complete whole, but you just want to break its reporting down into parts.
How do you see it? The reason I'm asking is that it can affect how we document the feature, what we recommend, etc.
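A sketch of case 1 ("runtime parametrization"), where the values are only discovered while the test runs, so `@pytest.mark.parametrize` cannot enumerate them at collection time. The `data` directory and `*.json` pattern here are invented for illustration.

```python
from pathlib import Path


def test_data_files(subtests):
    # Files are discovered at runtime, not at collection time.
    for path in Path("data").glob("*.json"):
        with subtests.test(msg="file", path=str(path)):
            assert path.stat().st_size > 0, f"{path} is empty"
```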
I think of subtests especially for the first case, but the second case is also useful. Say you test 3 different objects in the test and would like to see the test results for all the objects, even if the very first one fails. It's a similar use case to pytest-check.
How do you suggest we approach those cases here?
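A sketch of that "see all failures" case: three objects are checked in one test, and a failure for one of them does not prevent the others from being checked and reported. The names and values are placeholders.

```python
def test_three_objects(subtests):
    objects = {"first": 1, "second": 0, "third": 2}
    for name, value in objects.items():
        with subtests.test(msg=name):
            # If "second" fails here, "third" still runs and gets its own result.
            assert value > 0
```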
pytest-check is deferred assertions with a worse API.
It would be neat to have subtests.deferred_assertions() and allow people to use the assert statement there.
As for the real case of action grouping: when doing something like a larger acceptance test (as done in the MoinMoin wiki, for example), the goal is to report sections.
A way for a section failure to bubble through directly and fail the full test would be helpful.
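Purely to illustrate the idea, here is a hypothetical sketch of what a deferred-assertion context could look like. Nothing like this exists in pytest or pytest-subtests today, and supporting the bare `assert` statement (as suggested above) would need per-statement interception, which this sketch sidesteps with a `check` callable.

```python
# Hypothetical sketch only: collects check failures inside the block and raises
# them together when the block exits, i.e. the "deferred assertions" idea.
from contextlib import contextmanager


@contextmanager
def deferred_assertions():
    failures = []

    def check(condition, message="check failed"):
        # Record the failure instead of raising immediately.
        if not condition:
            failures.append(message)

    yield check
    if failures:
        raise AssertionError("; ".join(failures))


def test_deferred_demo():
    with deferred_assertions() as check:
        check(1 + 1 == 2, "addition broken")
        check("x" in "abc", "x missing")  # recorded, block keeps running
    # An AssertionError listing all recorded failures is raised on block exit.
```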
I actually didn't think of the "want to see all failures" use case. So it would be nice to document :)
I think my preferred approach to documenting subtests is not as an alternative to parametrization, but as something useful for a bunch of use cases which the reader can read about and add to their "tool set". That's just my idea.
We might want to bikeshed an API around sub-sections, subtests, and parameter loops a little - not necessarily right now, but to set up a roadmap.
I was hoping to get this feature into 9.0, if possible.
We absolutely want this in 9.0. We shouldn't change the API, to ensure compatibility with existing users. We should try to ensure lastfailed reruns tests with failed subtests before releasing 9.0.
I will add a test, good call.
Unfortunately I ran out of time to do a full review again, but I left some comments.
I also noticed two things while I was testing it:
If some or all subtests fail, the parent test is still PASSED; is this intentional?
For some reason the errors are shown as "Captured log call", which seems wrong as there are no log calls in my test.
Regarding the name SUBPASS etc., I saw that a regular pass is PASSED, so I wonder if it shouldn't be SUBPASSED etc.? (I can check the code later.)
from typing_extensions import Self

def pytest_addoption(parser: Parser) -> None:
- Can you explain the rationale for these options -- did someone ask for them in pytest-subtests?
- I admit the difference between the options is a bit obscure; I wonder if they can't be folded into a single option?
- I wonder if this shouldn't be an ini option rather than a CLI option? I imagine someone would want to set this if it's too noisy, in which case it's more of a project setting. But maybe it's more of a user-preference thing, in which case it's a CLI flag... (a sketch of both registration styles follows this list)
- Maybe it should be configurable at the test level, using the fixture itself? (Probably not.)
- Should we document the options in the tutorial?
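To make the ini-vs-CLI question concrete, here is a rough sketch of the two registration styles in a `pytest_addoption` hook. The option and ini names are invented for illustration and are not the options added by this PR.

```python
from pytest import Parser


def pytest_addoption(parser: Parser) -> None:
    group = parser.getgroup("subtests")
    # Command-line flag: a per-invocation, user-preference style setting.
    group.addoption(
        "--example-quiet-subtests",
        action="store_true",
        default=False,
        help="Example only: reduce subtest progress output.",
    )
    # Ini option: a per-project setting read from pytest.ini / pyproject.toml.
    parser.addini(
        "example_quiet_subtests",
        help="Example only: reduce subtest progress output.",
        type="bool",
        default=False,
    )
```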
These were external contributions: pytest-dev/pytest-subtests#81 and pytest-dev/pytest-subtests#198.
I'm OK with somehow merging them into a single option, perhaps an option to disable all subtest progress output unless it is a failure?
Now the top-level test will fail with a message if it contains failed subtests but otherwise has no failed assertions of its own:
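An illustrative sketch of such a test (not the original snippet from the PR): the test body has no failing assertion of its own, but under the behavior described above the failed subtest now causes the whole test to be reported as failed as well.

```python
def test_parent_now_fails(subtests):
    with subtests.test(msg="passes"):
        assert 1 + 1 == 2
    with subtests.test(msg="fails"):
        assert 1 + 1 == 3  # this subtest fails, and so does the parent test
```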
In addition, enable the plugin in `pytest/__init__.py` and `config/__init__.py`.
This PR copies the files from pytest-subtests and performs minimal integration. I'm opening this to gauge whether everyone is on board with integrating this feature into the core.

Why?

Pros

`subtests` is a standard `unittest` feature, so it makes sense for pytest to support it as well.

Cons

TODO ✅

If everyone is on board, I will take the time this week to polish it and get it ready to merge ASAP:

Related