
Allow erroring on remaining tasks after test exit as an option #1044

Open
evalott100 opened this issue Jan 14, 2025 · 2 comments

@evalott100

Problem

In ophyd-async, we want a test environment where, if there are unfinished tasks after any test has finished, that test will error. We had quite a lot of "Task was destroyed but it is pending!" teardown errors which would pop up in later tests, with no indication of which test the task was started in.

To fix this, we made an autouse fixture which adds a finalizer to every test; the finalizer raises an error if any tasks are still running after the test finishes, and cancels them.

https://github.com/bluesky/ophyd-async/blob/16b794a490d7cb91dd695a261133036e89f93da1/tests/conftest.py#L72-L137

This gives us a much more precise error pointing at the exact task/test which hasn't been handled and, most importantly, ensures later tests don't raise errors solely by virtue of running when the previous test's task is garbage collected:

```
ERROR tests/epics/adaravis/test_aravis.py::test_unsupported_trigger_excepts - RuntimeError: Not all tasks closed during test test_unsupported_trigger_excepts:
{<Task cancelling name='Task-2282' coro=<_wrap_awaitable() running at /dls_sw/apps/python/miniforge/4.10.0-0/envs/python3.11/lib/python3.11/asyncio/tasks.py:684> wait_for=<Task finished name='Task-2281' coro=<SignalW.set() done, defined at /scratch/twj43146/Programming/ophyd-async/src/ophyd_async/core/_signal.py:228> result=None created at /dls_sw/apps/python/miniforge/4.10.0-0/envs/python3.11/lib/python3.11/asyncio/tasks.py:374> cb=[gather.<locals>._done_callback() at /dls_sw/apps/python/miniforge/4.10.0-0/envs/python3.11/lib/python3.11/asyncio/tasks.py:754] created at /dls_sw/apps/python/miniforge/4.10.0-0/envs/python3.11/lib/python3.11/asyncio/tasks.py:670>}
```
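The idea behind the linked fixture can be sketched as a small standalone helper: run the test coroutine, then fail if any tasks are still pending. This is a minimal, hypothetical version for illustration (the function name is made up; the actual ophyd-async fixture linked above uses a pytest finalizer instead):

```python
# Minimal sketch: run a coroutine on a fresh event loop, then raise if
# any tasks leaked out of it.  Hypothetical helper, not the actual
# ophyd-async fixture.
import asyncio


def run_and_check_tasks(coro, test_name: str):
    """Run *coro* to completion and raise if tasks are still pending."""
    loop = asyncio.new_event_loop()
    try:
        loop.run_until_complete(coro)
        # all_tasks(loop) returns the tasks of this loop that are not done.
        leftover = asyncio.all_tasks(loop)
        if leftover:
            for task in leftover:
                task.cancel()
            # Let the cancellations propagate before reporting the failure.
            loop.run_until_complete(
                asyncio.gather(*leftover, return_exceptions=True)
            )
            raise RuntimeError(
                f"Not all tasks closed during test {test_name}: {leftover}"
            )
    finally:
        loop.close()
```

A test body that fires off `asyncio.ensure_future(asyncio.sleep(60))` without awaiting it would then raise `RuntimeError` naming the test, instead of warning during a later, unrelated test.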

Suggestion

I couldn't find an out-of-the-box pytest option for this behaviour, but it seems quite useful. Could something like this be packaged with pytest as a configurable option?

@seifertm
Contributor

I can definitely see that it can be desirable to fail a test when the code under test has pending tasks. I also understand the issue of warnings of pending tasks spilling over to other tests. Thanks for the detailed proposal.

Pytest-asyncio wants to transition from using low-level loop calls to using asyncio.Runner. I don't know off the top of my head how this feature could be integrated. I'm sure there's a way, though.
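For illustration only, a leftover-task check layered on asyncio.Runner (Python 3.11+) might look like the sketch below. This is a hypothetical editor's sketch, not pytest-asyncio's actual implementation, and the function name is made up:

```python
# Sketch: checking for leftover tasks on an asyncio.Runner's loop
# (requires Python 3.11+).  Illustrative only, not pytest-asyncio code.
import asyncio


def run_with_task_check(coro, test_name: str):
    """Run *coro* under asyncio.Runner, then fail on leftover tasks."""

    async def _reap(tasks):
        # Await the cancellations so the loop can close cleanly.
        await asyncio.gather(*tasks, return_exceptions=True)

    with asyncio.Runner() as runner:
        runner.run(coro)
        leftover = asyncio.all_tasks(runner.get_loop())
        if leftover:
            for task in leftover:
                task.cancel()
            runner.run(_reap(leftover))
            raise RuntimeError(
                f"Not all tasks closed during test {test_name}: {leftover}"
            )
```

Because `Runner.run()` can be called repeatedly on the same loop, the cancellation pass can reuse the runner before the error is raised.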

@evalott100
Author

> Pytest-asyncio wants to transition from using low-level loop calls to using asyncio.Runner. I don't know off the top of my head how this feature could be integrated. I'm sure there's a way, though.

Great, thanks for the response! I'm sure it would require a more general and robust solution than what we did, but with a task-heavy codebase we would find some kind of option for this incredibly helpful.
