Make graceful shut-down keep-alive behavior consistent #1236

Open
3 tasks done
tilgovi opened this issue Mar 25, 2016 · 22 comments

tilgovi (Collaborator) commented Mar 25, 2016

Following on from #922, the handling of keep-alive connections during graceful shutdown is not really specified anywhere and may not be consistent among workers.

  • Describe the intended behavior
  • Take stock of current behavior and make issues for each worker
  • Ship it
benoitc (Owner) commented Apr 7, 2016

@tilgovi do you need anything from me here?

tilgovi (Collaborator, Author) commented Apr 7, 2016

If you want to "describe the intended behavior" that would be helpful, otherwise I'll propose it.

benoitc (Owner) commented Oct 16, 2016

Sorry I missed your answer.

Graceful shutdown simply means we allow some time for in-flight requests to finish.

  1. When the signal is received, the workers stop accepting any new connections.
  2. When the graceful timeout expires, all still-running client connections are closed (their sockets are closed), keep-alive or not.

Speaking of keep-alive connections, I think we should stop the request loop when the signal is received instead of accepting any new requests. Thoughts?
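For reference, the two settings involved here are gunicorn's keepalive and graceful_timeout; a minimal config sketch with illustrative values only:

# gunicorn.conf.py -- illustrative values, not recommendations
workers = 4
keepalive = 2          # seconds an idle keep-alive connection is held open
graceful_timeout = 30  # seconds in-flight requests get to finish after a stop/reload signal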

tuukkamustonen commented:

When the graceful timeout expires, all still-running client connections are closed (their sockets are closed), keep-alive or not.

@benoitc Considering a sync worker without threads, I believe no connections are aborted or lost other than the request currently being processed, because connections are queued at the master process and not at the worker process level?

If so, can you clarify what you mean by "all still-running client connections are closed"? I assume you are referring to threaded/async workers here (where multiple requests may be processed concurrently, unlike a sync worker without threads)?

benoitc (Owner) commented Jun 29, 2018

@tuukkamustonen The master doesn't queue any connections; each worker is responsible for accepting connections. AFAIK connections are queued at the system level. When the master receives the HUP signal it notifies the workers and they stop accepting new connections. Running connections (those already accepted) then have the graceful time to finish, or are forcefully closed.
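To illustrate "queued at the system level": connections that have completed the TCP handshake wait in the kernel's listen backlog until a worker accepts them. A plain-socket sketch of the idea (not gunicorn code; gunicorn exposes the queue depth as its backlog setting):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 8000))
sock.listen(2048)            # pending connections queue in the kernel up to this depth
conn, addr = sock.accept()   # a worker pulling one connection off that queue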

tuukkamustonen commented:

AFAIK connections are queued at the system level

Ah, I wonder how that works / where that's configured... well, no need to go that deep :)

When the master receives the HUP signal it notifies the workers and they stop accepting new connections. Running connections (those already accepted) then have the graceful time to finish, or are forcefully closed.

Ok. This summarizes it nicely.

benoitc (Owner) commented Oct 9, 2018

@tilgovi we should probably close this issue?

tilgovi (Collaborator, Author) commented Oct 9, 2018

I would like to keep this one open. I'm not convinced we have consistent behavior here yet.

vgrebenschikov commented Nov 15, 2019

How to reproduce the problem (in fact, it can easily be observed on a busy production system during graceful shutdown):

  1. Run gunicorn with keep-alive (a trivial app returning some data):
    $ gunicorn --max-requests 512 --keep-alive 2 --threads 20 --workers 4 t:app

Run Apache Benchmark:

$ ab -n 10000 -c 20 -s 1 -k -r 127.0.0.1:8000/

...
Concurrency Level: 20
Time taken for tests: 2.693 seconds
Complete requests: 10000
Failed requests: 435

Over 4% of requests fail, just because workers were restarted (in this case by max-requests).

  2. Run gunicorn without keep-alive (same app):
    $ gunicorn --max-requests 512 --keep-alive 0 --threads 20 --workers 4 t:app

$ ab -n 10000 -c 20 -s 1 -k -r 127.0.0.1:8000/

...
Complete requests: 10000
Failed requests: 0

No failed requests this time.

Tried on gunicorn versions up to 20.0.0.
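For completeness, the kind of trivial app assumed by "t:app" above might look like this (hypothetical; the original app was only described as returning some data):

# t.py -- hypothetical stand-in for the trivial test app
def app(environ, start_response):
    body = b"hello\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]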

vgrebenschikov commented Nov 15, 2019

It is probably worth resolving the problem in the following way:

On graceful shutdown, try to serve one more request on each keep-alive connection after the shutdown is requested, and send Connection: close in the response to force the client not to reuse this socket for its next request. If no request arrives within a reasonable timeframe (e.g. 1s), just close the connection.

Yes, there is a small possibility of a race (the server decides to close just as the client sends a request), but this would largely close the window for problems under heavy load, where requests follow one another back to back.
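A rough Python sketch of the proposed drain behaviour (not gunicorn code; handle_request and shutting_down are hypothetical stand-ins for whatever a worker actually uses):

import select

DRAIN_WINDOW = 1.0  # the "reasonable timeframe" from the proposal above

def serve_keepalive(conn, handle_request, shutting_down):
    while True:
        # During shutdown, wait at most DRAIN_WINDOW seconds for one more request.
        timeout = DRAIN_WINDOW if shutting_down() else None
        readable, _, _ = select.select([conn], [], [], timeout)
        if not readable:
            break  # no request arrived within the drain window
        close_after = shutting_down()
        # During shutdown the response carries "Connection: close".
        handle_request(conn, force_connection_close=close_after)
        if close_after:
            break
    conn.close()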

benoitc (Owner) commented Nov 23, 2019

cc @tilgovi ^^

In fact there are two schools of thought here, IMO:

  1. We consider the connection already established, so we can accept one or more requests on it during the graceful time (what you're describing). I.e. we keep connections open as long as we receive requests on them and we are within the graceful time.
  2. If HUP or USR2 is sent to gunicorn, it means we want to switch the configuration or the application version as fast as possible. In that case the graceful time is there so we can terminate the current requests cleanly (finishing a transaction, etc.). But we don't want any new requests hitting the old version, so it makes sense to also stop accepting new requests on keep-alive connections and close them once the current request terminates.

I'm in favour of 2, which seems safer. Thoughts?

tilgovi (Collaborator, Author) commented Nov 23, 2019

I am very much in favor of option 2. That was the behavior I assumed, and I've made some changes to this end, linked from #922.

I don't know which workers implement all of these behaviors, but we should check:

  1. Close the socket. This is necessary in case Gunicorn is being shut down and not reloaded. The OS should be notified as soon as possible to stop accepting requests so that new requests can be directed to a different node in a load balanced deployment.

  2. Close connections after the next response. Do not allow connection keep-alive. For the reasons you state, it is best that any future request go to the new version of the code, or to another node.

Any other behaviors we should describe before we audit?

tilgovi (Collaborator, Author) commented Nov 23, 2019

Close the socket. This is necessary in case Gunicorn is being shut down and not reloaded. The OS should be notified as soon as possible to stop accepting requests so that new requests can be directed to a different node in a load balanced deployment.

This is #922. I think it is done for all workers and the arbiter.

Close connections after the next response. Do not allow connection keep-alive. For the reasons you state, it is best that any future request go to the new version of the code, or to another node.

This is this ticket. We should make sure all workers do this.

tilgovi (Collaborator, Author) commented Apr 21, 2020

Close the socket. This is necessary in case Gunicorn is being shut down and not reloaded. The OS should be notified as soon as possible to stop accepting requests so that new requests can be directed to a different node in a load balanced deployment.

This is #922. I think it is done for all workers and the arbiter.

I think this is still done, but we have a new issue caused by it at #1725. The same issue might exist for worker types other than eventlet.

Close connections after the next response. Do not allow connection keep-alive. For the reasons you state, it is best that any future request go to the new version of the code, or to another node.

This is this ticket. We should make sure all workers do this.

I think this is now done for the threaded worker and the async workers in #2288, ebb41da and 4ae2a05.

tilgovi (Collaborator, Author) commented Apr 21, 2020

I'm going to close this issue because I think it's mostly addressed now. I don't think the tornado worker implements graceful shutdown, but that can be a separate ticket.

tilgovi (Collaborator, Author) commented Apr 21, 2020

I've opened #2317 for Tornado and I'll close this.

tilgovi closed this as completed Apr 21, 2020
vgrebenschikov commented Jan 19, 2021

cc @tilgovi ^^
...
2. If HUP or USR2 is sent to gunicorn, it means we want to switch the configuration or the application version as fast as possible. In that case the graceful time is there so we can terminate the current requests cleanly (finishing a transaction, etc.). But we don't want any new requests hitting the old version, so it makes sense to also stop accepting new requests on keep-alive connections and close them once the current request terminates.

I'm in favour of 2, which seems safer. Thoughts?

Probably I was not clear enough... for a keep-alive connection there is no way to close the connection "safely". The client is allowed to send its next request immediately after receiving a response (without any wait), so if we simply close the connection after the last request finishes once HUP is signaled, we can certainly fail the next request unexpectedly (see the experiment above, which demonstrates that).

So the only "safe" way would be to either:

  1. Find a "waiting" window in which the client is not sending its next request and is just waiting on the keep-alive socket
    (no clue how to guarantee that, only approximate it)
    or
  2. Send at least one more response with Connection: close before actually closing the connection, though we may wait a long time for the next request...

tilgovi (Collaborator, Author) commented Jan 24, 2021

The gevent and eventlet workers do not have any logic to close keep-alive connections during graceful shutdown. Instead, they have logic to force "Connection: close" on responses to requests that arrive during graceful shutdown. So I believe it is already the case that they will send a "Connection: close" before actually closing the connection.

There is always a possibility that a long request ends close enough to the graceful timeout deadline that the client never gets to send another request and discover "Connection: close" before the server closes the connection forcefully. I don't see any way to avoid that. Set a longer graceful timeout to handle this.
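Roughly sketched, the idea those workers implement is just to stamp every response produced after shutdown begins (an illustration, not the actual worker code):

def finalize_headers(headers, shutting_down):
    # headers: list of (name, value) tuples for the outgoing response
    if shutting_down:
        headers = [(k, v) for k, v in headers if k.lower() != "connection"]
        headers.append(("Connection", "close"))
    return headers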

tilgovi (Collaborator, Author) commented Feb 16, 2021

Re-opening until I can address issues in the eventlet and threaded workers.

Just to reiterate, on graceful shutdown a worker should (see the sketch at the end of this comment):

  • Continue to notify the arbiter that it is alive.
  • Stop accepting new requests.
  • Close the socket.
  • Force Connection: close for any future requests on existing keep-alive connections.
  • Shutdown when there are no further connections or the timeout expires.

Right now, the eventlet worker cannot handle existing keep-alive connections because it fails on listener.getsockname() after the socket is closed. The threaded worker is not handling existing keep-alive requests and is not notifying the arbiter that it is still alive.

I'll work to get both PRs submitted this week. I apologize for not being able to do so sooner.
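A condensed sketch of those five bullets for a hypothetical worker (accepting, listener, force_connection_close, active_connections, notify, graceful_timeout and shutdown are stand-ins, not gunicorn's actual attributes):

import time

def graceful_shutdown(worker):
    worker.accepting = False               # stop accepting new requests
    worker.listener.close()                # close the socket so a load balancer can fail over
    worker.force_connection_close = True   # future keep-alive responses get "Connection: close"
    deadline = time.monotonic() + worker.graceful_timeout
    while worker.active_connections() and time.monotonic() < deadline:
        worker.notify()                    # keep telling the arbiter this worker is alive
        time.sleep(1)
    worker.shutdown()                      # exit once drained or when the timeout expires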

tilgovi reopened this Feb 16, 2021
vupadhyay commented:

gunicorn = "20.0.4"
worker_class = "gthread"
threads = 20
workers = 1
keepalive = 70
timeout = 500

In my case, when the gunicorn master receives a SIGHUP signal (sent by consul-template to reload refreshed secrets written to a file on local disk), it creates a new worker and gracefully shuts down the old worker. However, during the transition from the old worker to the new one, HTTP connections cached between the client and the old worker (keep-alive connections) become stale, and any request the client sends over a stale socket hangs and eventually times out.

Essentially, the threaded worker is not able to handle existing keep-alive requests.

ccnupq commented Jun 22, 2023

Hi @tilgovi, is there any update on this issue?

Can it be closed?

tilgovi (Collaborator, Author) commented Dec 29, 2023

There are still issues as documented in my last comment.
