[thread] close sockets at graceful shutdown #1230
Conversation
The run loop has to change slightly to support graceful shutdown. There is no way to interrupt a call to `futures.wait`, so instead we follow the pattern used by the async workers and sleep for at most one second. The poll timeout is extended to one second to match. Since threads are preemptively scheduled, it's possible that the listener has already been closed by the time a request is actually handled. For this reason the TConn class is refactored slightly to store the name of the listening socket. The name is checked once at the start of the worker run loop. Ref #922
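A minimal standalone sketch of the pattern described above (class and attribute names here are illustrative assumptions, not gunicorn's exact code):

```python
import selectors
import time


class TConn:
    # Sketch: the connection remembers the *name* of the listener it
    # arrived on, so a handler thread can later check whether that
    # listener is still open (the socket object itself may be closed
    # by the time the request is handled).
    def __init__(self, sock, listener_name):
        self.sock = sock
        self.listener_name = listener_name


class Worker:
    def __init__(self, listeners):
        self.alive = True
        self.nr_conns = 0
        self.worker_connections = 1000
        self.poller = selectors.DefaultSelector()
        # listener name -> socket, so handlers can look listeners up by name
        self.listeners = {sock.getsockname(): sock for sock in listeners}
        for sock in self.listeners.values():
            self.poller.register(sock, selectors.EVENT_READ)

    def run(self):
        while self.alive:
            if self.nr_conns < self.worker_connections:
                # Wake at least once per second: select() returns as soon
                # as a socket is ready, and the 1.0s cap bounds how long a
                # graceful-shutdown signal (which clears self.alive) can go
                # unnoticed. futures.wait() offers no equivalent hook.
                for key, _ in self.poller.select(1.0):
                    client, _ = key.fileobj.accept()
                    conn = TConn(client, key.fileobj.getsockname())
                    self.nr_conns += 1
                    # hand `conn` off to the thread pool here
            else:
                time.sleep(1.0)  # at capacity; re-check self.alive periodically
        # graceful shutdown: stop accepting; in-flight requests drain
        self.poller.close()
```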
Open questions:
@tilgovi sorry for the delay... response follows
@@ -205,33 +207,37 @@ def run(self):
             # can we accept more connections?
             if self.nr_conns < self.worker_connections:
                 # wait for an event
-                events = self.poller.select(0.02)
+                events = self.poller.select(1.0)
We may want to return faster in the loop there. Why increase the wait time?
Why faster? We sleep for 1.0 in gevent and eventlet. The granularity here only matters for closing keep-alive connections and responding to graceful shutdown signals. The poll returns as soon as a connection is ready for accept or ready to read a keep-alive request, and the actual requests are handled in other threads. This main thread should sleep as long as it can.
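To see that the longer timeout does not delay accepts, here is a standalone demonstration (standard library only, not gunicorn code) that `select()` returns as soon as a registered socket becomes ready rather than sleeping for the full timeout:

```python
import selectors
import socket
import time

sel = selectors.DefaultSelector()
r, w = socket.socketpair()
sel.register(r, selectors.EVENT_READ)

w.send(b"x")                  # make r readable immediately
start = time.monotonic()
events = sel.select(1.0)      # returns right away, not after 1.0s
elapsed = time.monotonic() - start
print(f"select returned {len(events)} event(s) in {elapsed:.4f}s")

r.close(); w.close(); sel.close()
```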
the reason to be faster there is that this selector is used on read to check whether the socket has entered a state where we can accept. If we wait too long we can queue more accepts than we want and fail to balance the connections correctly between threads and workers. So we should be fast there imo.
It will return as soon as the first socket is ready. There is no danger such as you describe.
hrm actually probably. go ahead then :)