Breaking change in 0.23.*
tkukushkin opened this issue · 60 comments
Hello! Something has been broken with the latest pytest-asyncio releases.
Consider such code:
```python
import asyncio

import pytest


@pytest.fixture(scope='session')
def event_loop():
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()


@pytest.fixture(scope='session')
async def some_async_fixture():
    return asyncio.get_running_loop()


async def test_something(some_async_fixture):
    assert asyncio.get_running_loop() is some_async_fixture
```

pytest.ini:

```ini
[pytest]
asyncio_mode = auto
```

This test passes with pytest-asyncio 0.21.1 but fails with 0.23.0. I am not sure whether that is intended, but if it is, IMHO it is a breaking change.
I can confirm. We use a similar setup with two session-level fixtures (one to redefine the event loop, another for our own purposes). Tests don't work anymore; they complain either about "The future belongs to a different loop than the one specified as the loop argument" or "Event loop is closed".
The version 0.23.0 changelog is already mentioning the breaking change: https://github.com/pytest-dev/pytest-asyncio/releases/tag/v0.23.0
I went through the same, the new way to do it can be seen in this PR https://github.com/pytest-dev/pytest-asyncio/pull/662/files
It would be nice to have some documentation on how to migrate to the new way.
It says "This release is backwards-compatible with v0.21. Changes are non-breaking, unless you upgrade from v0.22."
@albertferras-vrf the changelog mentions asyncio_event_loop mark removal, I think it is only about upgrading from 0.22.
You are right, I forgot about that part
It says "This release is backwards-compatible with v0.21. Changes are non-breaking, unless you upgrade from v0.22."
That was the original intention, yes. I can reproduce the difference between v0.21.1 and v0.23 and I agree that this is a breaking change (by accident).
With regards to the migration and as a workaround: The fundamental idea of v0.23 is that each pytest scope (session, package, module, class, and function) provides a separate event loop. You can decide for each test in which loop they run via the new scope kwarg to the asyncio mark.
@tkukushkin In your specific example you want to run the test in the same loop as the fixture. The some_async_fixture fixture has session scope. In order to achieve your goal, you should mark test_something accordingly:
```python
@pytest.mark.asyncio(scope="session")
async def test_something(some_async_fixture):
    assert asyncio.get_running_loop() is some_async_fixture
```

See also the part about asyncio event loops in the Concepts section of the docs.
@redtomato74 I'd like to hear more about your use case for two different event loops. I suggest you open a separate issue for this.
@seifertm I'd like to have only one loop for all fixtures and tests, without adding decorators to every test and fixture. We have thousands of tests in hundreds of services where all tests and fixtures share one loop, and it is crucial for them. Is there any workaround to emulate the old behaviour?
Unfortunately, this how-to (https://pytest-asyncio.readthedocs.io/en/v0.23.2/how-to-guides/run_session_tests_in_same_loop.html) does not consider fixtures at all.
I'd like to have only one loop for all fixtures and tests, without additional decorators to all tests and fixtures.
The linked how-to is supposed to make it easy to add the asyncio mark to all of your tests, specifically for large test suites where marking all packages or modules is cumbersome. Unfortunately, there seem to be problems with it, too. Please refer to #705
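A rough sketch of what such a conftest.py hook can look like. To be clear, this is not the exact code from the how-to: it detects coroutine tests via `inspect` instead of pytest-asyncio's helper, and it assumes v0.23's `scope` kwarg on the asyncio mark.

```python
# conftest.py (sketch): mark every collected coroutine test to run in the
# session-scoped event loop, so individual tests don't need decorators.
# Assumes pytest-asyncio v0.23's `scope` kwarg; adapt to your version.
import inspect

import pytest


def pytest_collection_modifyitems(items):
    session_mark = pytest.mark.asyncio(scope="session")
    for item in items:
        func = getattr(item, "function", None)
        if func is not None and inspect.iscoroutinefunction(func):
            item.add_marker(session_mark, append=False)
```

The hook only adds a marker; whether fixtures follow the tests into that loop is exactly the open problem discussed in this thread.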
We have thousands of tests in hundreds of services where all tests and fixtures share one loop and it is crucial for them.
I don't think this use case was considered during the development of v0.22 and v0.23. Can you explain why you need all tests and fixtures to run in the same loop?
Unfortunately, this how-to (https://pytest-asyncio.readthedocs.io/en/v0.23.2/how-to-guides/run_session_tests_in_same_loop.html) does not consider fixtures at all.
That's a fair point. The docs should be updated to explain how fixtures are run.
Fixtures in v0.23 generally behave like tests and choose the event loop of the fixture scope. That means if a fixture has session scope it will run in the session-wide loop.
I cannot think of a workaround to switch to the old behaviour at the moment. I suggest pinning pytest-asyncio to <0.23 until this issue is fixed.
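As a concrete illustration of the pinning workaround, a one-line constraint in your dependency file (adjust to your dependency manager):

```
# requirements.txt
pytest-asyncio<0.23
```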
Fixtures in v0.23 generally behave like tests and choose the event loop of the fixture scope. That means if a fixture has session scope it will run in the session-wide loop.
Sorry, but I don't understand: I have not only session-scoped fixtures but also module-scoped fixtures, and they should use the same event loop as the session-scoped ones. Could you please describe how to achieve this?
We have thousands of tests in hundreds of services where all tests and fixtures share one loop and it is crucial for them.
I don't think this use case was considered during the development of v0.22 and v0.23. Can you explain why you need all tests and fixtures to run in the same loop?
We write blackbox tests for microservices using pytest and pytest-asyncio. Some session-scoped fixtures, for example, create a database connection pool, which all tests can use to check database state. Another session-scoped fixture monitors logs of subprocesses (instances of the application under test) in the background and captures them into a list that tests can check. These subprocesses can be started by any fixture or test as an async context manager. And obviously, a subprocess (asyncio.subprocess) must be started on the same loop as the fixture that captures its logs. We have many more such examples.
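The failure mode behind this requirement can be reproduced with plain asyncio, outside pytest entirely. A minimal sketch: a Future is bound to the loop that created it, and a task on a different loop that awaits it fails, which is the same class of error ("... attached to a different loop") reported in this thread.

```python
import asyncio

loop_a = asyncio.new_event_loop()
fut = loop_a.create_future()  # bound to loop_a, never completed here


async def consume():
    return await fut  # executed by a task running on loop_b


loop_b = asyncio.new_event_loop()
try:
    loop_b.run_until_complete(consume())
    outcome = "no error"
except RuntimeError:
    # asyncio detects the cross-loop await and raises
    outcome = "RuntimeError: different loop"
finally:
    loop_a.close()
    loop_b.close()

print(outcome)
```

This is why fixtures that create loop-bound objects (pools, queues, subprocess watchers) must run on the same loop as the tests that use them.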
Thanks for the explanation!
Fixtures in v0.23 generally behave like tests and choose the event loop of the fixture scope. That means if a fixture has session scope it will run in the session-wide loop.
Sorry, but I don't understand: I have not only session-scoped fixtures but also module-scoped fixtures, and they should use the same event loop as the session-scoped ones. Could you please describe how to achieve this?
This is what I meant when I said your use case hasn't been considered in the development. There's currently no way to control the event loop used by a fixture independently from the fixture scope.
The v0.23 release will not work for your test suite. I suggest that you downgrade to v0.21.1.
I'll think of a way to control the fixture scope independently of the event loop scope.
Yes, we have already downgraded to 0.21.1. I don't think it's gonna be a problem for us for a long time (at least until Python 3.13).
I'll think of a way to control the fixture scope independently of the event loop scope.
Thank you! Looking forward to the news.
After upgrading to v0.23, an error occurs in teardown:

```python
@pytest.fixture
def server():
    from main import _create_fastapy_server
    app = _create_fastapy_server()
    return app


@pytest_asyncio.fixture
async def client_async(server):
    app = server
    async with (
        app.router.lifespan_context(app),
        AsyncClient(app=app, base_url="http://testserver") as client,
    ):
        yield client


@pytest.mark.asyncio
async def test_server(client_async):
    """Start - Stop"""
```

Output:
```
tests/test_server.py:63 (test_server)
    def finalizer() -> None:
        """Yield again, to finalize."""

        async def async_finalizer() -> None:
            try:
                await gen_obj.__anext__()
            except StopAsyncIteration:
                pass
            else:
                msg = "Async generator fixture didn't stop."
                msg += "Yield only once."
                raise ValueError(msg)

>       event_loop.run_until_complete(async_finalizer())

/home/tonal/.pyenv/versions/epool-smsserver/lib/python3.11/site-packages/pytest_asyncio/plugin.py:336:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/tonal/.pyenv/versions/3.11.6/lib/python3.11/asyncio/base_events.py:628: in run_until_complete
    self._check_closed()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <_UnixSelectorEventLoop running=False closed=True debug=True>

    def _check_closed(self):
        if self._closed:
>           raise RuntimeError('Event loop is closed')
E           RuntimeError: Event loop is closed

/home/tonal/.pyenv/versions/3.11.6/lib/python3.11/asyncio/base_events.py:519: RuntimeError
sys:1: RuntimeWarning: coroutine '_wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.finalizer.<locals>.async_finalizer' was never awaited
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
/home/tonal/.pyenv/versions/epool-smsserver/lib/python3.11/site-packages/aiohttp/client.py:357: ResourceWarning: Unclosed client session <aiohttp.client.ClientSession object at 0x7fbfa55246f0>
  _warnings.warn(
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/home/tonal/.pyenv/versions/epool-smsserver/lib/python3.11/site-packages/aiohttp/connector.py:277: ResourceWarning: Unclosed connector <aiohttp.connector.TCPConnector object at 0x7fbfa5b67ad0>
  _warnings.warn(f"Unclosed connector {self!r}", ResourceWarning, **kwargs)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/home/tonal/.pyenv/versions/3.11.6/lib/python3.11/asyncio/sslproto.py:119: ResourceWarning: unclosed transport <asyncio._SSLProtocolTransport object>
  _warnings.warn(
/home/tonal/.pyenv/versions/3.11.6/lib/python3.11/asyncio/selector_events.py:864: ResourceWarning: unclosed transport <_SelectorSocketTransport fd=17>
  _warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
ResourceWarning: Enable tracemalloc to get the object allocation traceback
```
@seifertm while I deeply appreciate your work, which is crucial for all Python and pytest users dealing with async tests, I don't understand why you had to change the way the event loop is set up: the previous way worked just fine.
Please consider restoring the previous behaviour.
As for use cases, a few people provided their setups and needs in #657 (and this was my comment)
@ffissore Thanks for the kind words and for being so upfront. I'm generally open to restoring the previous behavior, if the existing problems with it can be solved in another way. Before I give a more extensive answer:
Do you take issue with the bugs and incompatibilities that were (involuntarily) introduced in v0.23? Or do you think the new approach is generally flawed?
I did not follow this issue closely, but I wanted to mention that 0.23 broke all our pipelines at work with some strange errors, and the same happened with one of the open source projects I maintain: https://github.com/FreeOpcUa/opcua-asyncio. The solution so far has been to revert to 0.21 everywhere.
I'm afraid I'm not in a position to judge the approach: I don't know enough about the previous design and the desired long-term design.
I can see how switching from the event loop to the event loop policy still allows folks to use alternative event loop implementations (we, for example, use uvloop), but also how it makes it harder to define custom scopes, or even different scopes depending on the test or test package.
IMHO the best solution is the one that makes the end developer write as little code as possible, and only code that is strictly related to what the dev wants to do.
With this in mind, overriding the event_loop fixture was good enough: it's 5 lines of code written in the proper conftest file. Markers, instead, are not good, as they add lots of boilerplate for specifying the same setting all over a test suite: markers are better used to specify exceptions, not rules.
It was good enough, but a terrible DX, honestly. It took me quite a while to figure it out the first time I saw this problem (and I thought it was a bug or very bad design). If the goal here is that we don't need these five lines anymore, I'm all in and will just pin my dependency until we update our code. Let's not revert to something ugly if the new approach is better.
Also, the semantic versioning meaning of 0.* versions is that they can introduce breaking changes at any time. If you don't want to be hit by this kind of breaking change in the first place, just pin your dependencies...
I think the real problem is that the migration process is not clear. I would have expected that removing the event_loop fixture would be enough with asyncio_mode = auto, but it's not.
Here is a minimal reproducible example (with GitHub CI!) of a real world application that I can't migrate to 0.23 so far: https://github.com/ramnes/pytest-asyncio-706
@seifertm Any hint on how something like this should be migrated? (PR welcome on that repository.) If it's not possible to migrate, then this would be the real issue: it wouldn't be a breaking change but a loss of functionality. Otherwise we'll probably have a few bits to add to the documentation here. :)
I did not follow this issue closely, but I wanted to mention that 0.23 broke all our pipelines at work with some strange errors, and the same happened with one of the open source projects I maintain: https://github.com/FreeOpcUa/opcua-asyncio. The solution so far has been to revert to 0.21 everywhere.
@oroulet Your issue seems to be the same as the OP's: pytest-asyncio falsely assumes that the scope of a fixture is tied to the scope of the event loop in which it should run. This results in fixtures running in a different loop than tests and breaks the pytest run.
Hello. These breaking changes are really annoying.
```
=============================== warnings summary ===============================
tests/test_orders.py::test_order_tracking_db[29556]
  /home/***/***/***/***/.venv/lib/python3.12/site-packages/pytest_asyncio/plugin.py:647: DeprecationWarning: There is no current event loop
    old_loop = asyncio.get_event_loop()
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
========================= 2 passed, 1 warning in 1.77s =========================
Finished running tests!
```
Anyway, the test does not fail and the async code executes successfully. I suggest including gists for cases of using ORMs in tests, because there are many approaches to achieving setup and teardown, e.g.:
```python
@pytest.fixture(scope="session")
def event_loop_policy():
    return uvloop.EventLoopPolicy()


@pytest.fixture(scope="session")
def db_engine():
    settings = DBSettings()  # type:ignore
    engine = create_async_engine(
        f"{str(settings.dsn)}",
        pool_pre_ping=True,
        pool_size=10,
        max_overflow=50,
        connect_args={
            "server_settings": {"jit": "off"},
        },
        echo=True,
        isolation_level="READ_COMMITTED",
        pool_recycle=600,
    )
    yield engine
    engine.sync_engine.dispose()


@pytest_asyncio.fixture(scope="session")
async def db_session(db_engine):
    async with AsyncSession(bind=db_engine) as session:
        yield session
        await session.rollback()
        await session.close()


@pytest_asyncio.fixture(scope="session")
async def client(db_session):
    def _get_db_override():
        return db_session

    app = app_factory()
    app.dependency_overrides[connection] = _get_db_override
    async with AsyncClient(app=app, base_url="http://test") as client:
        try:
            yield client
        finally:
            await client.aclose()
```
It works as expected, but with warnings. I have also seen approaches with

```python
@pytest_asyncio.fixture(scope="function")
async def db_session(db_engine):
    ...
```

that cause errors if used with parametrized functions.
The package is a "must have" when working with asyncio, in my opinion. Please make it easier to use.
I don't know if it's related, but I have some tests that interact with twisted.internet.asyncioreactor.AsyncioSelectorReactor. I have another set of tests that require pytest-asyncio.
To avoid "RuntimeError: Event loop is closed", I need to use pytest-order to ensure that the Twisted tests run first. Otherwise, it seems like pytest-asyncio plays around with the event loop when running the other set of tests and closes the event loop, such that the Twisted tests would fail if run last ("RuntimeError: Event loop is closed").
IMHO the best solution is the one that makes the end developer write as little code as possible, and only code that is strictly related to what the dev wants to do. With this in mind, overriding the event_loop fixture was good enough: it's 5 lines of code written in the proper conftest file. Markers instead are not good, as it's lots of boilerplate for specifying the same setting all over a test suite: markers are better used to specify exceptions, not rules.
@ffissore You have a valid point here. To alleviate the boilerplate of adding markers to every test case, pytest-asyncio provides something like a "default marker approach" via a conftest.py [0]
Pytest also has pytestmark, which allows applying markers to all tests in a module or class.
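For example, assuming v0.23's `scope` kwarg on the asyncio mark, a single `pytestmark` line at the top of a module covers every test in it:

```python
# test_module.py (sketch): apply the asyncio mark, with a session-scoped
# loop, to all tests in this module via pytest's `pytestmark` convention.
import pytest

pytestmark = pytest.mark.asyncio(scope="session")
```

The same line in a class body applies the mark to all tests in that class.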
I promised to get back to you regarding the reasoning why event_loop fixture overrides have been deprecated. In short: pytest-asyncio is forced to change significantly, for example by the pending deprecation of asyncio.get_event_loop. These things are reasonably hard to change when there's a defined API. However, some users override event_loop and add all sorts of other functionality to it. This creates a great number of additional failure modes and makes it much harder to perform the necessary changes. I wrote more extensively about the reasoning in this comment.
That said, I'm still convinced the deprecation of loop overrides is the only "way out". I'm also committed to providing an appropriate migration path to all users, and I'm very open to suggestions for improvement.
[0] Unfortunately, this mechanism is currently broken (see #705)
I think the real problem is that the migration process is not clear. I would have expected that removing the event_loop fixture would be enough with asyncio_mode = auto, but it's not. Here is a minimal reproducible example (with GitHub CI!) of a real world application that I can't migrate to 0.23 so far: https://github.com/ramnes/pytest-asyncio-706
@seifertm Any hint on how something like this should be migrated? (PR welcome on that repository.) If it's not possible to migrate, then this would be the real issue: it wouldn't be a breaking change but a loss of functionality. Otherwise we'll probably have a few bits to add to the documentation here. :)
@ramnes Thanks for the example project. I think this is the migration experience we should be aiming for.
The issue is that fixture scope is currently bound to loop scope. That means a session-scoped fixture will always be executed in a session-scoped loop, which is apparently a blocker for many users.
What does everyone think about introducing a separate loop_scope argument to pytest_asyncio.fixture, in order to control the fixture scope and loop scope separately? For example:
```python
# Fixture runs once per session, but runs in the default event loop (= function-scoped loop)
@pytest_asyncio.fixture(scope="session")
async def my_fixture():
    ...


# Fixture runs once per session and runs in a session-scoped loop
@pytest_asyncio.fixture(scope="session", loop_scope="session")
async def my_fixture():
    ...
```

I don't know if it's related, but I have some tests that interact with twisted.internet.asyncioreactor.AsyncioSelectorReactor. I have another set of tests that require pytest-asyncio. To avoid "RuntimeError: Event loop is closed", I need to use pytest-order to ensure that the Twisted tests run first. Otherwise, it seems like pytest-asyncio plays around with the event loop when running the other set of tests and closes it, such that the Twisted tests would fail if run last ("RuntimeError: Event loop is closed").
@jpmckinney This sounds like it's caused by something else. Is it possible for you to provide a small example that reproduces the error and file a separate issue for that?
What does everyone think about introducing a separate loop_scope argument to pytest_asyncio.fixture, in order to control the fixture scope and loop scope separately?
No objections. This is what we're currently doing manually.
@seifertm I think it's indeed unrelated. Using twisted.internet.asyncioreactor.AsyncioSelectorReactor was causing some other issues, so I abandoned it in favor of the default (select) reactor, and pytest-asyncio is working fine.
Hi @seifertm, thank you for your reply and the references.
If I got this right, pytest-asyncio 0.21.1 is perfect (the get_event_loop deprecation aside), but you're concerned by the mess folks make in an overridden event_loop fixture: you want to prevent them from shooting themselves in the foot, and you would rather give them specific interfaces for fiddling with low-level stuff.
I'll say something that might sound counter-intuitive: I don't think you should care.
I think this library should perfectly do what it does, document where and how it can be customised, and point folks to docs when they abuse certain fixtures or fiddle with the library internals.
The documentation already provides a clear example of what is needed to customise the default event_loop fixture. If folks start adding tons of unnecessary stuff to it, it's not this library's fault, and it's not something this library should handle.
To the extreme, it should not even be this library's responsibility to close an unclosed event loop: if anything, it should raise an exception when it detects that the event loop hasn't been closed.
If folks think it is necessary to do all the stuff they do in their event_loop fixtures, I think they'll find a way to do it regardless of the constraints imposed by the library. Especially with a language like python where monkey patching is so easy.
Fixing wrong customisations on behalf of devs makes your work harder, as the library starts becoming a knight fighting off dragons. And there will always be dragons.
Can you explain why you need all tests and fixtures to run the same loop?
Here's an example that's similar to our tests that I don't think can be upgraded to 0.23. Essentially, we have module-level async fixtures for setting up a relatively expensive Application, and function-scoped fixtures for small customizations to the app that are selected on a per-test basis. The function-scoped fixtures and tests need the same loop as the app because the application's loop must be running to handle the async operations in the function-scoped fixtures and the tests themselves.
```python
import asyncio

import pytest


class App:
    pass


@pytest.fixture(scope="module")
async def app():
    # create expensive app, once per module
    app = App()
    app.event_loop = asyncio.get_running_loop()
    yield app
    # app cleanup


@pytest.fixture(scope="function")
async def app_plugin(app):
    # enable a plugin (involves async operations on app's loop)
    print("await enable_plugin(app)")
    yield
    print("await disable_plugin(app)")
    assert app.event_loop.is_running()


@pytest.mark.asyncio(scope="module")
async def test_loops(app, app_plugin):
    print("await async_app_operation")
    assert app.event_loop is asyncio.get_running_loop()
```

Both asserts fail because both the function-scoped fixture (expected) and the test (unexpectedly) run with the function-scoped event loop. (`@pytest.mark.asyncio(scope="module")` is ignored on the test due to the inclusion of the function-scoped fixture, which I assume is not intended?) If the function-scoped fixture is removed, the module-scoped event loop is used.
I believe @seifertm's proposal of an explicit loop_scope decorator for async fixtures would address our case just fine.
If we wanted to work around the issue today, I believe we could replace all of our function-scoped async fixtures with sync fixtures that use app.event_loop.run_until_complete(...).
What we really want is to recover the behavior of "everybody please use the module-level event loop", so whatever path to that behavior works best for pytest-asyncio ought to be fine for us. In our case, all of our function-level async fixtures depend on our module-level async fixture, so it is conceivable that the situation is detectable, but I'm sure an explicit decorator on every fixture is easier to handle, if a bit more tedious to use. A single flag for defining a lowest or default level of event-loop creation would be simpler for us to use (e.g. asyncio-loop-min-level = "module" would mean function-level fixtures and tests use the module-level event loop by default, but not the session-level one), if that's practical.
It also seems reasonable to me to expect a function-scoped fixture to run with the same loop as the function itself, so if I e.g. have @pytest.mark.asyncio(scope="module") on a test, it makes sense to use that scope (as the default, at least) on all function-scoped async fixtures used in the test. I'm not sure if that information is readily available at the right time, though, so I'd understand if it's not practical to implement.
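The run_until_complete workaround mentioned above can be sketched without pytest at all. Here `App`, `enable_plugin`, and `disable_plugin` are illustrative stand-ins, not pytest-asyncio API, and the generator is driven manually the way pytest drives a yield fixture:

```python
import asyncio

# Replace a function-scoped *async* fixture with a *sync* one that drives
# coroutines on the app's stored loop via run_until_complete.


class App:
    def __init__(self):
        self.event_loop = asyncio.new_event_loop()
        self.plugins = []


async def enable_plugin(app):
    app.plugins.append("plugin")


async def disable_plugin(app):
    app.plugins.remove("plugin")


def app_plugin(app):
    # body of a plain (sync) @pytest.fixture
    app.event_loop.run_until_complete(enable_plugin(app))
    yield
    app.event_loop.run_until_complete(disable_plugin(app))


# Manual walk-through of the fixture's lifecycle:
app = App()
gen = app_plugin(app)
next(gen)                        # fixture setup
assert app.plugins == ["plugin"]
try:
    next(gen)                    # fixture teardown
except StopIteration:
    pass
assert app.plugins == []
app.event_loop.close()
```

This only works while the app's loop is not already running, which is the case between pytest-asyncio test calls.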
What does everyone think about introducing a separate loop_scope argument to pytest_asyncio.fixture, in order to control the fixture scope and loop scope separately? For example:
@seifertm Two questions about this proposal:

1st question: Let's say we have a test with two async fixtures that use different loop_scope values. Should I expect pytest-asyncio to raise an error? Say we have the following two fixtures:

```python
@pytest_asyncio.fixture(scope="session", loop_scope="session")
async def database_connection():
    ...


@pytest_asyncio.fixture(scope="session")  # << no loop_scope defined, so the default (`function`) is used
async def random_csv_file():
    ...


async def mytest(random_csv_file, database_connection):
    ...
```

In this case, the two fixtures use two different loop scopes, but it should be possible to run random_csv_file with the session scope, right? When thinking of all the different scopes (package/module/...), there seem to be some compatibility/loop-scope overrides that could be made, but it looks like this should be investigated carefully.

2nd question: Can we have some extra configuration for mode=auto that also allows us to define the default loop_scope?
As I think communication/documentation will be key to overcoming this breaking change and its fallout, I propose adding the URL of the migration documentation to the deprecation warning and extending the migration documentation with the patterns used.
Following is a pattern/MRE I use that I'd expect to be very useful to have documented for migrating to 0.23+: using the uvloop event loop policy to run Hypercorn on an asyncio event loop to serve FastAPI.

```shell
pip install pytest pytest-asyncio hypercorn uvloop fastapi httpx
```
```python
import asyncio
import random
from typing import Optional, List
import sys

import pytest
import pytest_asyncio
from hypercorn.asyncio import serve
from hypercorn.config import Config
import uvloop
from fastapi import FastAPI
import httpx

app = FastAPI(
    version="1.0.0", title="pytest-dev/pytest-asyncio#706",
    servers=[{"url": "/", "description": "Default, relative server"}]
)


@app.get("/random", operation_id="getRandom", response_model=List[int])
def getRandom(limit: Optional[int] = 3) -> List[int]:
    return [random.randrange(0, 6) for _ in range(limit)]


@pytest.fixture(scope="session")
def event_loop(request):
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()


@pytest.fixture(scope="session")
def config(unused_tcp_port_factory):
    c = Config()
    c.bind = [f"localhost:{unused_tcp_port_factory()}"]
    return c


@pytest_asyncio.fixture(scope="session")
async def server(event_loop, config):
    policy = asyncio.get_event_loop_policy()
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    try:
        sd = asyncio.Event()
        task = event_loop.create_task(serve(app, config, shutdown_trigger=sd.wait))
        yield config
    finally:
        sd.set()
        await task
        asyncio.set_event_loop_policy(policy)


class Client:
    def __init__(self, url):
        self.c = httpx.AsyncClient()
        self.url = url

    async def get(self, path):
        return await self.c.get(f"{self.url}/{path}")


@pytest_asyncio.fixture(scope="session")
async def client(event_loop, server):
    c = Client(f"http://{server.bind[0]}")
    dd = await c.get("openapi.json")
    return c


@pytest.mark.asyncio
async def test_getRandom(client):
    r = await client.get("random")
    assert r.status_code == 200
    assert len(r.json()) == 3


@pytest.mark.asyncio
@pytest.mark.skipif(sys.version_info < (3, 9), reason="requires asyncio.to_thread")
async def test_to_thread(client):
    r = await asyncio.to_thread(httpx.get, f"{client.url}/openapi.json")
    assert r.status_code == 200
```

I'm glad to see more people are experiencing this issue. Apologies, but the latest version is an absolute nightmare. After 5 hours of trying to adjust my environment, I still can't get it working. I have downgraded to pytest-asyncio 0.21.1 and will try to hold on to it as long as possible. I hope there will be some future changes to make this migration easier, but for now it's a huge headache.
We also needed to downgrade to 0.21.1 to make our project work. Waiting for a new update. Thanks!
I was afraid this day would come. Thanks for keeping 0.21.1 alive.
pytest-asyncio==0.21.2 is now available and works well with pytest==8.2.0
pytest-asyncio==0.21.2 is now available and works well with pytest==8.2.0
Epic-quick fix. Many thanks @ffissore .
If you have overridden the event_loop fixture, we resolved the issue by using get_event_loop() instead of new_event_loop(). This works with pytest-asyncio v0.23.7 and pytest v8.2.1. Maybe it helps some of you.

```python
@pytest.fixture(scope="session")
def event_loop(request):
    loop = asyncio.get_event_loop_policy().get_event_loop()
    yield loop
    loop.close()
```
What's the blessed new way to call loop.set_debug(True)?
What's the blessed new way to call loop.set_debug(True)?

Some options come to my mind:

- asyncio.get_running_loop().set_debug(True)
- Setting the env var PYTHONASYNCIODEBUG=1
- Running Python in development mode with the -X dev option

Do they work for your use case?
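The first option can be checked with plain asyncio, no pytest involved:

```python
import asyncio


async def main():
    # Flip on debug mode from inside running async code, then read it back.
    loop = asyncio.get_running_loop()
    loop.set_debug(True)
    return loop.get_debug()


print(asyncio.run(main()))  # prints True
```

Inside a pytest-asyncio test or fixture, the same two lines would run at the top of the coroutine.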
What's the blessed new way to call loop.set_debug(True)?
Some options come to my mind:
asyncio.get_running_loop().set_debug(True)
Where, though?
@dada-engineer your suggestion produces
```
File "/Users/tamird/Library/Caches/pypoetry/virtualenvs/common-hf-Ms37h-py3.12/lib/python3.12/site-packages/pytest_asyncio/plugin.py", line 342, in finalizer
    event_loop.run_until_complete(async_finalizer())
File "/Users/tamird/.pyenv/versions/3.12.4/lib/python3.12/asyncio/base_events.py", line 662, in run_until_complete
    self._check_closed()
File "/Users/tamird/.pyenv/versions/3.12.4/lib/python3.12/asyncio/base_events.py", line 541, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```
@seifertm any ideas?
Yes, sorry, my confusion. This is actually also the original error. I had it as well but managed to sort it out. This happens when you use a differently scoped fixture as a dependency in a lower-scoped fixture, I guess.
Edit: or maybe I'm confusing it again; we hit this with the moto server and aiobotocore mocking. It was the only issue we had then. I'd need to see the code in detail, sorry.
Hello! Something has been broken with the latest pytest-asyncio releases.
Consider such code:
```python
import asyncio

import pytest


@pytest.fixture(scope='session')
def event_loop():
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()


@pytest.fixture(scope='session')
async def some_async_fixture():
    return asyncio.get_running_loop()


async def test_something(some_async_fixture):
    assert asyncio.get_running_loop() is some_async_fixture
```

pytest.ini:

```ini
[pytest]
asyncio_mode = auto
```

This test passes with pytest-asyncio 0.21.1 but fails with 0.23.0. I am not sure whether that is intended, but if it is, IMHO it is a breaking change.
This worked for me. Thanks a lot
@tkukushkin
#871 contains a preliminary patch for separating the caching and event loop scopes of async fixtures. This should address the main point of this issue. I also went through the comments again and tried to include additional requests, such as setting the default event loop scope for async fixtures via a config setting.
@ramnes provided an example project which did not upgrade properly to pytest-asyncio v0.23. Using the patch, the migration effort was reduced to a 2-line change (see ramnes/pytest-asyncio-706#1).
As I see it, the path forward is to finish the PR, especially the documentation updates, and create a release candidate that users affected by this issue can try in their specific projects.
#871 worked for me:
- Add
asyncio_default_fixture_loop_scope=sessionto pytest.ini - Change event loop fixture to:
@pytest.fixture(scope="session")
def event_loop():
    """Needed for https://github.com/igortg/pytest-async-sqlalchemy"""
    loop = asyncio.get_event_loop_policy().get_event_loop()
    yield loop
    loop.close()
Edit: For clarification, I am doing this because it is required to use https://github.com/igortg/pytest-async-sqlalchemy
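Putting the two workaround steps together, the resulting pytest.ini would look roughly like this (a sketch: asyncio_mode = auto is taken from the original report, and asyncio_default_fixture_loop_scope is the option introduced by the #871 patch):

```ini
[pytest]
asyncio_mode = auto
asyncio_default_fixture_loop_scope = session
```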
#871 seems to be working for me too - running everything with session scope. Rather than overriding the event_loop fixture like @jaxor24 did, I followed the instructions in https://pytest-asyncio.readthedocs.io/en/latest/how-to-guides/run_session_tests_in_same_loop.html.
Incidentally, asyncio_default_fixture_loop_scope is much appreciated over having to annotate every asynchronous fixture. It might be nice to have something similar to change the default test loop scope, so that it can be done with one line instead of all the incantations in the How-to (e.g. https://pytest-asyncio.readthedocs.io/en/latest/how-to-guides/run_session_tests_in_same_loop.html).
FYI decorating async fixtures and test functions is not required: use "auto mode" https://pytest-asyncio.readthedocs.io/en/v0.21.1/concepts.html#auto-mode
# pytest.ini
[pytest]
asyncio_mode = auto
Thanks for the early feedback! There's now a pre-release version (pytest-asyncio v0.24.0a0), which supports separate loop and caching scopes for async fixtures. The docs were updated correspondingly.
Any feedback is much appreciated!
Incidentally, asyncio_default_fixture_loop_scope is much appreciated over having to annotate every asynchronous fixture. It might be nice to have something similar to change the default test loop scope, so that it can be done with one line instead of all the incantations in the How-to (e.g. https://pytest-asyncio.readthedocs.io/en/latest/how-to-guides/run_session_tests_in_same_loop.html).
@bmerry Agreed! A configuration option for default loop scope is tracked in #793
I ran into this issue today working on a Quart project. Pinning 0.21.2 and adding a session-scoped event_loop fixture worked for the fixture scope problems, but I noticed another weird issue. Using an in-memory sqlite3 db via async sqlalchemy that is populated (tables created) in the create_app quart factory function works fine for the first test file, but it gives me errors about tables not existing in the second test file. Using a file for the db doesn't have these issues, so this tells me that the session-scoped async fixture that constructs the application (and the db connection) is somehow getting GCd in between test files. Maybe it's getting wrapped in a task without a reference? Tasks are only weakref'd in asyncio IIRC. Another weird thing is that changing the scope to "module" doesn't fix the problem, but I would think this would cause the create_app factory to get re-run for each test file and thus recreate the in-memory db.
I think this is a separate issue since it also happens on 0.21.2, so I will open another issue for this with a minimal repro this weekend, but I had a question related to this thread as well, so I'm posting this here for now. :)
This might be better as a discussion, so feel free to flag as off-topic, but I was just wondering what the motivation for all this explicit event loop management is?
I'm working on a talk about asyncio and websockets for my local Python user-group, and I would like to understand the way that this library wraps async stuff for pytest a bit more, but it seems like the explicit event loop management is unnecessary when asyncio provides the get_event_loop function.
I've never had to do any explicit event loop management in my projects since I mostly use asyncio in applications where I can rely on async/await, so I'm pretty inexperienced with the lower-level APIs.
Is it to isolate concurrent test setup/code? Wouldn't it be sufficient to simply collect all of the async test cases and fixtures and await them in a sync for-loop? Something like:
async def one():
    await asyncio.sleep(1)
    print("one second")
    assert False

async def two():
    await asyncio.sleep(2)
    print("two seconds")
    assert False

async def three():
    await asyncio.sleep(3)
    print("three seconds")
    assert False

async def main():
    coroutines = [one(), two(), three()]
    for c in coroutines:
        fut = asyncio.create_task(c)
        try:
            await fut
        except AssertionError:
            print("test failed")
            # pytest failure reporting stuff.

asyncio.run(main())

I'm guessing that the reason for all of this has to do with integrating with pytest itself, since pytest does a lot of meta-programming for test collection and fixture setup, so it's not as simple as writing a sync wrapper function around the async test cases. But pytest doesn't do anything async, so it still seems like a lot of extra maintenance burden on the pytest-asyncio authors to manage event loop scopes when simply running the test functions in whatever asyncio.get_event_loop returns should be sufficient.
That would also let the user override the event loop if they need more control.
I'm sure this wouldn't work, but something like:
def use_custom_loop(custom_loop):
    def outer(func):
        @functools.wraps(func)
        async def inner(*args, **kwargs):
            current_loop = asyncio.get_event_loop()
            asyncio.set_event_loop(custom_loop)
            result = await func(*args, **kwargs)
            asyncio.set_event_loop(current_loop)
            return result
        return inner
    return outer

Feels like it should be usable.
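As the author suspects, this decorator would not work: asyncio.set_event_loop() only rebinds the thread's "current" loop (what get_event_loop() returns when no loop is running), and it cannot move an already-running coroutine onto another loop. A minimal, standalone check of that behavior:

```python
import asyncio

async def loops_match():
    other = asyncio.new_event_loop()
    before = asyncio.get_running_loop()
    # Rebinding the thread's "current" loop has no effect on the loop
    # this coroutine is actually executing on.
    asyncio.set_event_loop(other)
    after = asyncio.get_running_loop()
    other.close()
    return before is after

result = asyncio.run(loops_match())
print(result)  # True: the running loop is unchanged
```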
I want to be clear that I'm not criticizing here. I really appreciate this library and all the work the maintainers put in since I do tons of async code in Python and JS. I just want to understand the motivation behind this design since from my grug-brain perspective, it seems like just using whatever loop Python gives us would be a lot simpler.
@GrammAcc Honestly, I don't think I did a very good job explaining why the changes to pytest-asyncio are needed. Therefore, I think your questions are very relevant to this issue and I'll gladly try again to share the motivation behind the changes. After that, I'll share my thoughts on your code example.
- A bunch of the low-level asyncio functionality is being deprecated, such as asyncio.get_event_loop(). Upstream is also leaning towards deprecating asyncio.set_event_loop(), and the policy system is being deprecated, too. While it will still be possible to use asyncio.new_event_loop to request a new loop, pytest-asyncio heavily relies on get_event_loop and its side effect of creating a new loop when no loop has been set. That means something has to change in pytest-asyncio, or it will break in future Python versions.
- The current practice of overriding the event_loop fixture was a frequent source of headaches and subsequent bug reports. Since the fixture was a "free-form" function where you could write all kinds of code, it was sometimes abused to contain other kinds of setup code.
- The fact that nearly all users of pytest-asyncio had a custom copy of the event_loop fixture means that all the burden of moving to newer Python versions falls on the users. It also means that it was practically impossible for pytest-asyncio to make non-breaking changes to the event_loop fixture, or that those changes had no effect. However, some of these changes are required to fix point 1.
That's why pytest-asyncio tried to come up with a new way of having event loops with different scopes without requiring a carbon copy of the event_loop fixture implementation.
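For illustration, the non-deprecated way to keep one loop alive across several coroutine invocations, which is essentially what a session-scoped loop requires, is to create the loop explicitly rather than relying on get_event_loop's implicit creation. A minimal sketch (not pytest-asyncio's actual implementation):

```python
import asyncio

async def which_loop():
    return asyncio.get_running_loop()

# Create one loop explicitly and reuse it across runs, instead of
# depending on get_event_loop() implicitly creating one.
loop = asyncio.new_event_loop()
try:
    first = loop.run_until_complete(which_loop())
    second = loop.run_until_complete(which_loop())
finally:
    loop.close()

print(first is second)  # True: both coroutines ran on the same loop
```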
Is it to isolate concurrent test setup/code? Wouldn't it be sufficient to simply collect all of the async test cases and fixtures and await them in a sync for-loop?
Depending on the kind of testing you do, it may be desirable that each test case in your test suite is isolated from the other tests. Your specific example runs all test cases in a single asyncio event loop. That means they can potentially influence each other, for example through race conditions from background tasks or context variables. Pytest-asyncio, on the other hand, runs each async test case in an isolated asyncio event loop by default, in an effort to avoid these kinds of pitfalls.
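The cross-loop failure mode described in the thread can be reproduced outside pytest in a few lines. This sketch (hypothetical names; asyncio.run creates a fresh loop per call) mimics a session fixture holding a pending awaitable while each "test" gets its own loop:

```python
import asyncio

pending = None  # module-level state shared between the two "tests"

async def fixture_like_setup():
    global pending
    # Something bound to the first event loop and left pending, the way a
    # session fixture might hold a connection or background task.
    pending = asyncio.get_running_loop().create_future()

async def test_like_reuse():
    await pending  # now running on a different loop than the future's

asyncio.run(fixture_like_setup())
try:
    asyncio.run(test_like_reuse())  # a second, different event loop
    failed = False
except RuntimeError:  # "... attached to a different loop"
    failed = True
print(failed)  # True
```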
The goal is definitely to have a smooth development experience where you don't have to call low-level asyncio functions. Currently, pytest-asyncio is in a bit of an in-between state, because it's trying to provide a migration path for all users towards a "no event_loop fixture overrides" state. Once this is done, a lot of internal refactorings should be unblocked, which should solve some longstanding bugs, such as #127.
I hope this answers your questions! If not, feel free to follow up or to reach out otherwise :)
@seifertm Thank you for your detailed response! That makes perfect sense, and I had no idea upstream was deprecating {get|set}_event_loop.
I definitely agree that managed event loops are better than the overridable fixture in this case, but one thing I will suggest is that I think it's okay to let the user deal with race conditions in the test suite on their own. :)
I've written automated tests in several different environments (Python, NodeJS, Elixir, C++), and one thing that always comes up is that some tests simply need to access some kind of shared state (a test db for example). In a well-structured test suite, most tests will be completely isolated, but eventually, we have to actually test the real behavior of the application, and that usually involves shared state and side effects. How the programmer resolves that is usually application-dependent. For example, in one Node project I worked on, I wrote some setup code to essentially create dynamic per-thread postgres databases (yes an entire db for each thread lol) since it made sense for our environment. Different dev machines and CI VMs had different resource limitations, the ORM and web framework we were using made it difficult to isolate the db that different services used, and the full test suite needed to run in <30 seconds or so. That's not a great way to handle race conditions in db tests, but mocking wasn't an option in this case, and even though the test runner isolated tests into their own threads, the db is still a giant blob of shared mutable state, so I had to work around it somehow.
FWIW, isolating tests into their own event loop also doesn't solve these problems with race conditions accessing a test db, and it seems to make it harder for the user to implement an application-appropriate solution as well since the behavior of a shared fixture/resource (with locking or whatever you decide to use to synchronize it) gives unexpected results due to the event loop isolation.
I think the asyncio_default_fixture_loop_scope setting is probably sufficient to address these kinds of issues, but I just wanted to give my two cents on that part of it since I think test isolation is an extra burden that the pytest-asyncio maintainers can probably leave to the users. As long as the behavior of the async test cases matches the expectation of the users (e.g. test cases are executed sequentially, but any shared state between them needs to be synchronized by the user), then I think you're probably safe to leave that to us. :)
I guess what I'm trying to articulate is that it's probably okay to focus on making the library easy to maintain and refactor and not worry about test isolation as long as test cases don't run concurrently.
At the end of the day, most async applications already have to deal with the potential for race conditions, so the application developer will usually have an idea of what will cause concurrency problems in the test suite, and if they don't, that's something that they have to learn eventually. :)
Anyway, just my two cents on the topic.
I appreciate everything you do here, and thank you again for taking the time to give such a detailed and thoughtful response!
I agree 100% with this point. While it's important to give beginners a default, sane behavior out of the box, it's also important to remember that beginners will usually write simpler code.
As long as the simple use-cases will be taken care of, I think it's ok to give advanced users the responsibility for the rest. Especially if it gives you an easier time maintaining the code and adding features.
In my case, I'd really hope for an opt-in way to execute test cases concurrently sometime in the future (I can already think of tons of edge-cases and bugs it may cause to my tests but as said before, that's something I'm willing to have the responsibility to solve in my own code).
Thank you so much for the transparency and in general for the entire work being done here. My team and I have managed to save so much time by utilizing concurrency in our tests!
@GrammAcc @UltimateLobster Your input is much appreciated. The same sentiment was already echoed previously by ffissore in #706 (comment). Maybe pytest-asyncio is trying to do too much. I agree that software can be a pain if it tries to be smarter than the user.
I don't think this is a discussion we should have right now, though. The goal of the releases since v0.21 was to make it easier for pytest-asyncio to evolve by "internalizing" the event_loop fixture. At the same time, pytest-asyncio wanted to provide a migration path for existing users. Admittedly, this introduced additional complexity, but the releases didn't try to add convenience features to outsmart the developer. Once the transition is complete, we should revisit this topic and see if and how we can simplify the functionality provided by the library.
That said, I hope that v0.24 resolves this long-standing issue. Thanks to all the participants in the discussion. GitHub is currently the only channel for pytest-asyncio maintainers to understand what users really want.
Sorry for necrobumping this one. I just wanted to let the maintainers know that I mentioned a possible bug in #706 (comment), but it turned out there was nothing wrong in pytest-asyncio. It was a bug in how I was connecting to different DBs when spawning servers in the test suite. I was just chasing a red herring with the event loop stuff. :)
I was busy preparing for a talk at my local user group, and I forgot to report back here.
Sorry for the confusion, and thanks again for such an awesome tool!
Hi @seifertm,
Unfortunately I don't think the changes in 0.24 fix my use case that has been broken since 0.21.1.
@pytest_asyncio.fixture(scope="session")
async def db_schema(db_engine):
    log.info("populating db schema")
    # create schema in database
    async with db_engine.begin() as conn:
        await conn.run_sync(__execute_upgrade)
    log.info("Schema created")
    return db_engine

@pytest_asyncio.fixture(scope="session")
async def db_connection(db_schema):
    # connect to the database
    connection = await db_schema.connect()
    # return connection to the Engine
    yield connection
    await connection.close()

@pytest_asyncio.fixture(scope="function")
async def db_session(db_connection):
    """Create a fixture for a DB session.

    The idea of this fixture is to allow tests to use the same DB connection, the transaction from which will get rolled back after each test, leaving the DB in a clean state.
    This fixture should be function scoped, running for each test, but should use the event loop from the session-scoped fixture it depends on.

    Args:
        db_connection (_type_): _description_

    Returns:
        _type_: _description_
    """
    # begin a non-ORM transaction
    trans = db_connection.begin()
    await trans.start()
    # bind an individual Session to the connection
    session = AsyncSession(bind=db_connection, expire_on_commit=False)
    yield session
    await session.close()
    # rollback - everything that happens within the
    # Session below (including calls to commit())
    # is rolled back.
    await trans.rollback()

@pytest.mark.asyncio(loop_scope="session")
async def test_something(db_session):
    # I want this test to use the event loop from the session fixture
    pass
Running the test gives me this error:
raise MultipleEventLoopsRequestedError(
E pytest_asyncio.plugin.MultipleEventLoopsRequestedError: Multiple asyncio event loops with different scopes have been requested
E by test/test_me.py::test_something. The test explicitly requests the event_loop fixture, while
E another event loop with session scope is provided by .
E Remove "event_loop" from the requested fixture in your test to run the test
E in a session-scoped event loop or remove the scope argument from the "asyncio"
E mark to run the test in a function-scoped event loop.
@greemo I cannot comment on the error message, but your use case should be supported.
Starting with v0.24, fixtures can specify different scopes for caching (scope) and for the event loop (loop_scope). Try adding loop_scope="session" to all fixture decorators in your example; @pytest_asyncio.fixture(loop_scope="session", scope="function") should give the desired behavior for db_session.
As I think communication/documentation will be key to overcoming this breaking change and its fallout: using the example from #706 (comment), the code for 0.24 is
import asyncio
import random
from typing import Optional, List
import sys
import inspect
import pytest
import pytest_asyncio
from hypercorn.asyncio import serve
from hypercorn.config import Config
import uvloop
from fastapi import FastAPI
import httpx
app = FastAPI(
    version="1.0.0", title="pytest-dev/pytest-asyncio#706",
    servers=[{"url": "/", "description": "Default, relative server"}]
)

@app.get("/random", operation_id="getRandom", response_model=List[int])
def getRandom(limit: Optional[int] = 3) -> List[int]:
    return [random.randrange(0, 6) for _ in range(limit)]

@pytest.fixture(scope="session")
def config(unused_tcp_port_factory):
    c = Config()
    c.bind = [f"localhost:{unused_tcp_port_factory()}"]
    return c

@pytest_asyncio.fixture(loop_scope="session")
async def server(config):
    event_loop = asyncio.get_event_loop()
    try:
        sd = asyncio.Event()
        task = event_loop.create_task(serve(app, config, shutdown_trigger=sd.wait))
        yield config
    finally:
        sd.set()
        await task

@pytest.fixture(scope="session")
def event_loop_policy():
    return uvloop.EventLoopPolicy()

class Client:
    def __init__(self, url):
        self.c = httpx.AsyncClient()
        self.url = url

    async def get(self, path):
        print(f"{__file__}:{inspect.currentframe().f_lineno} {id(asyncio.get_event_loop())=}")
        return await self.c.get(f"{self.url}/{path}")

@pytest_asyncio.fixture(loop_scope="session")
async def client(server):
    c = Client(f"http://{server.bind[0]}")
    dd = await c.get("openapi.json")
    return c

@pytest.mark.asyncio(loop_scope="session")
async def test_getRandom(client):
    r = await client.get("random")
    assert r.status_code == 200
    assert len(r.json()) == 3

@pytest.mark.asyncio(loop_scope="session")
@pytest.mark.skipif(sys.version_info < (3, 9), reason="requires asyncio.to_thread")
async def test_to_thread(client, server):
    r = await asyncio.to_thread(httpx.get, f"{client.url}/openapi.json")
    assert r.status_code == 200

delta
--- my_pytest_test.py 2024-09-18 16:07:19.630570449 +0200
+++ my_pytest_asyncio.py 2024-10-03 14:07:50.705652032 +0200
@@ -2,6 +2,7 @@
import random
from typing import Optional, List
import sys
+import inspect
import pytest
import pytest_asyncio
@@ -19,17 +20,11 @@
)
-@app.get("/random", operation_id="getRandom", response_model=list[int])
-def getRandom(limit: int | None = 3) -> list[int]:
+@app.get("/random", operation_id="getRandom", response_model=List[int])
+def getRandom(limit: Optional[int] = 3) -> List[int]:
return [random.randrange(0, 6) for _ in range(limit)]
-@pytest.fixture(scope="session")
-def event_loop(request):
- loop = asyncio.get_event_loop_policy().new_event_loop()
- yield loop
- loop.close()
-
@pytest.fixture(scope="session")
def config(unused_tcp_port_factory):
@@ -38,10 +33,9 @@
return c
-@pytest_asyncio.fixture(scope="session")
-async def server(event_loop, config):
- policy = asyncio.get_event_loop_policy()
- asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
+@pytest_asyncio.fixture(loop_scope="session")
+async def server(config):
+ event_loop = asyncio.get_event_loop()
try:
sd = asyncio.Event()
task = event_loop.create_task(serve(app, config, shutdown_trigger=sd.wait))
@@ -49,34 +43,38 @@
finally:
sd.set()
await task
- asyncio.set_event_loop_policy(policy)
+@pytest.fixture(scope="session")
+def event_loop_policy():
+ return uvloop.EventLoopPolicy()
+
class Client:
def __init__(self, url):
self.c = httpx.AsyncClient()
self.url = url
async def get(self, path):
+ print(f"{__file__}:{inspect.currentframe().f_lineno} {id(asyncio.get_event_loop())=}")
return await self.c.get(f"{self.url}/{path}")
-@pytest_asyncio.fixture(scope="session")
-async def client(event_loop, server):
+@pytest_asyncio.fixture(loop_scope="session")
+async def client(server):
c = Client(f"http://{server.bind[0]}")
dd = await c.get("openapi.json")
return c
-@pytest.mark.asyncio
+@pytest.mark.asyncio(loop_scope="session")
async def test_getRandom(client):
r = await client.get("random")
assert r.status_code == 200
assert len(r.json()) == 3
-@pytest.mark.asyncio
+@pytest.mark.asyncio(loop_scope="session")
@pytest.mark.skipif(sys.version_info < (3, 9), reason="requires asyncio.to_thread")
-async def test_to_thread(client):
+async def test_to_thread(client, server):
r = await asyncio.to_thread(httpx.get, f"{client.url}/openapi.json")
assert r.status_code == 200
No reason to be afraid of this any longer.