Task <Task pending name='Task-1' coro=<main() running at /.../test.py:100> cb=[_run_until_complete_cb() at /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py:184]> got Future <Future pending cb=[shield.<locals>._outer_done_callback() at /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/tasks.py:907]> attached to a different loop
The exception occurs here: aiohttp/connector.py:1145
The problem arises because the same session is reused: the aiohttp.TCPConnector stored in it captures the current event loop at creation time.
But synchronous and asynchronous requests are executed in different event loops. So whenever the current event loop differs from the one that was active when the aiohttp.TCPConnector was created, we always get this exception.
The simplest workaround is to set refresh to True in s3fs.core.S3FileSystem.set_session, but this does not seem like the best solution.
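For what it's worth, the "attached to a different loop" error can be reproduced with plain asyncio, no aiohttp involved: a Future created on one loop and awaited from a task running on another loop triggers exactly this message. A minimal sketch:

```python
import asyncio

# A Future is bound to the loop it was created on, much like the
# connections that aiohttp.TCPConnector holds onto.
loop_a = asyncio.new_event_loop()
fut = loop_a.create_future()

async def waiter():
    await fut  # awaited from a task running on a different loop

loop_b = asyncio.new_event_loop()
error = ""
try:
    loop_b.run_until_complete(waiter())
except RuntimeError as exc:
    # "Task ... got Future ... attached to a different loop"
    error = str(exc)
finally:
    loop_a.close()
    loop_b.close()

print(error)
```

This is the same check that fails inside aiohttp when the connector's loop no longer matches the loop driving the request.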
Yes, you are completely right: it is expected that the event loop is either in the same thread as the execution (in which case you use the async methods) or not (in which case you don't). The asynchronous= argument exists to mark this difference, and it busts the instance-caching mechanism; originally this argument made a real difference in behaviour, but that difference has shrunk to nothing.
> The simplest solution is to set refresh to True in s3fs.core.S3FileSystem.set_session
You should create two S3FileSystem instances, one in a coroutine for use with async, and one outside.
The session is always async, so that you can do bulk operations even in sync code. The difference is where the event loop is running, so trying to maintain multiple loops running in different threads within the one instance seems to me to be a bad idea.
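The one-instance-per-loop advice can be sketched without s3fs, using a toy class (LoopBoundClient, hypothetical, standing in for S3FileSystem) that binds to whichever event loop is current when it is constructed, the way aiohttp.TCPConnector does:

```python
import asyncio

class LoopBoundClient:
    # Hypothetical stand-in for a filesystem whose connector binds to
    # the event loop that is current at creation time.
    def __init__(self):
        try:
            self._loop = asyncio.get_running_loop()
        except RuntimeError:
            # Created outside any running loop: start a private one,
            # the way a sync-facing instance would.
            self._loop = asyncio.new_event_loop()

    async def _fetch(self):
        # Mimic aiohttp's loop check.
        if asyncio.get_running_loop() is not self._loop:
            raise RuntimeError("Future attached to a different loop")
        return "ok"

    def fetch(self):
        # Sync entry point: drive the private loop.
        return self._loop.run_until_complete(self._fetch())

# One instance per world:
sync_client = LoopBoundClient()  # created outside any loop, for sync use

async def main():
    async_client = LoopBoundClient()  # created inside the running loop
    ok = await async_client._fetch()  # async path works
    try:
        await sync_client._fetch()    # reusing the sync instance fails
    except RuntimeError:
        pass  # "attached to a different loop", as in the traceback above
    return ok

print(sync_client.fetch())       # sync path works
print(asyncio.run(main()))       # async path works with its own instance
```

With s3fs the same split would look like constructing one S3FileSystem outside any loop for synchronous calls, and a second one (with asynchronous=True) inside the coroutine that uses the async methods.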