Sanic drops part of HTTP response data #2921
Comments
I can confirm this problem and this is a very critical bug.
I believe the issue lies within the close() method. Instead of directly calling self.abort(), it should be replaced with: timeout = self.app.config.GRACEFUL_SHUTDOWN_TIMEOUT; self.loop.call_later(timeout, self.abort). With this adjustment the connection behaves correctly; otherwise, nginx reports "upstream prematurely closed connection while reading upstream" (see the sketch below).
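A minimal sketch of that suggestion, assuming it lands in the connection's close() method; the attribute names follow Sanic's server protocol classes, but the surrounding method body is an assumption, not quoted from Sanic's source:

```python
# Sketch of the proposed change, not Sanic's actual source.
# self.transport, self.abort, self.app.config and self.loop follow Sanic's
# protocol classes; the rest of the method body may differ between versions.
def close(self):
    """Close the connection, aborting only after the graceful timeout."""
    if self.transport:
        self.transport.close()
        # Instead of calling self.abort() immediately, schedule it so any
        # buffered response data can still be flushed to the client.
        timeout = self.app.config.GRACEFUL_SHUTDOWN_TIMEOUT
        self.loop.call_later(timeout, self.abort)
```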
I encountered the same issue. When using the Python packages requests or aiohttp to send requests to a Sanic server, everything works fine. However, when using Nginx as a reverse proxy, Nginx reports an error: 'upstream prematurely closed connection while reading upstream.' Downgrading to Sanic 23.6.0 resolves this error.
Regarding the proposed code: the graceful shutdown timeout normally has a different meaning in Sanic (roughly speaking: how long to wait for the handler to finish), not that of just closing a TCP connection. This would need a somewhat deeper look at what exactly is being fixed here and what the proper approach is, plus tests for those cases. Possibly related also to #2531 (Nginx failures).
It appears that the incorrect behavior of my changes was due to my misunderstanding of the meaning of the graceful shutdown timeout. I tried to look deeper, but so far without success. The main problem is that I use keep-alive 99% of the time, so I had not encountered this problem when testing my changes. As of now, I still cannot reproduce the problem when keep-alive is used.
@Tronic I think they are just using the shutdown timer here as a further delay of the response timeout.
@robd003 No. That is an incorrect use of the graceful timeout. But also, I am not sure why we would want to further delay it. @xbeastx Shouldn't this just be easily solvable by increasing the response timeout?
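For reference, raising the response timeout is a one-line config change; a minimal sketch (the value 120 is purely illustrative):

```python
# Sketch of increasing the response timeout, as suggested above.
from sanic import Sanic

app = Sanic("Demo")
app.config.RESPONSE_TIMEOUT = 120  # seconds allowed for producing a response
```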
…ged how multiprocessing works. Python 3.11 is now required, old versions were causing unpredictability in tests. (Sanic does not yet support 3.12) Sanic has been upgraded to 23.6.0, which is the latest version that avoids this bug: sanic-org/sanic#2921 New strategy for multiprocessing is to create all multiprocessing tools in one process, then fork to other processes. The previous strategy was to declare multiprocessing tools at the top of every file, or wherever they were needed at import/creation. Now all multiprocessing tools are attached to the app.shared_ctx. This means `api_app` is imported in many, many places. This forced a change in how the DownloadManager works. Previously, it would continually run download workers which would pull downloads from a multiprocessing.Queue. Now, a single worker checks for new downloads and sends a Sanic signal. Flags have been reworked to use the `api_app`. I removed the `which` flag functionality because the `which` are called at import and needed their own multiprocessing.Event.
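As an aside, the shared_ctx pattern described in that commit message could look roughly like this; the app and queue names here are illustrative, not the project's actual code:

```python
# Rough sketch of attaching multiprocessing tools to app.shared_ctx.
from multiprocessing import Queue

from sanic import Sanic

api_app = Sanic("api_app")

@api_app.main_process_start
async def setup_shared_ctx(app):
    # Multiprocessing tools are created once, in the main process, and
    # attached to app.shared_ctx so forked workers can share them.
    app.shared_ctx.download_queue = Queue()
```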
@ahopkins I just came across this issue upgrading from 23.6.0 to 23.12.1. My response timeout is configured to 60 seconds, which is not reached before the response data is truncated (in my case always at 109 KB), so to answer your question about increasing response_timeout: I don't think so. There is also a thread on Discord that appears to be the same issue: https://discord.com/channels/812221182594121728/1209575840203939880
Is there an existing issue for this?
Describe the bug
The bug is that Sanic closes the connection before all of the data has been transferred (see the example below).
From my point of view this is a very critical bug. For reasons unknown to me, it only occurs when the Sanic server is started inside a Docker container (perhaps it is somehow related to the network mode or to delays introduced by Docker).
After analyzing the commits, we determined that the bug was introduced in 1310684 and does not occur on d1fc867.
Code snippet
So we have a really simple server returning some JSON, demo.py:
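The demo.py snippet itself is not preserved here; a minimal sketch of such a server, with an illustrative payload size, might be:

```python
# demo.py -- a hedged reconstruction; the original snippet is not preserved.
# It only needs to return a JSON body large enough (hundreds of KB) to be
# split across many TCP segments.
from sanic import Sanic
from sanic.response import json

app = Sanic("Demo")

@app.get("/")
async def handler(request):
    # Roughly 700 KB of JSON, in the ballpark of the 743015 bytes noted below.
    return json({"data": ["x" * 100] * 7000})
```

With the Sanic CLI this would be started as `sanic demo:app --host 0.0.0.0 --port 8000`.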
Running in a Docker container:
$ docker build -t demo .
$ docker run -p 127.0.0.1:8000:8000 --rm -it demo
and the client.py:
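The client.py snippet is also not preserved here; a minimal sketch, assuming the client simply measures how many body bytes arrive (reading slowly, similar to a reverse proxy), might be:

```python
# client.py -- a hedged reconstruction; the original client is not preserved.
# It reads the response slowly over a raw socket and prints how many body
# bytes actually arrived.
import socket
import time

def fetch_body_length(host: str = "127.0.0.1", port: int = 8000) -> int:
    sock = socket.create_connection((host, port))
    sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    received = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        received += chunk
        time.sleep(0.01)  # deliberately slow reader
    sock.close()
    # The body is everything after the blank line that ends the headers.
    return len(received.split(b"\r\n\r\n", 1)[-1])

if __name__ == "__main__":
    print(fetch_body_length())
```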
I was not able to reproduce it with curl; maybe it reads too fast. But in the real case it reproduces with an Nginx proxy and Sanic as the upstream.
So if you now run it a hundred times, you will get something like this:
The length should be 743015 bytes, but Sanic returns only 586346-652954.
client.py must be run outside the Docker container, e.g. on the host. If you run it inside Docker, the issue does not reproduce.
Expected Behavior
Return all the data from response.
How do you run Sanic?
Sanic CLI
Operating System
Linux
Sanic Version
v23.12.1
Additional context
No response