Draining connections on close #3617
Comments
I have researched this in great detail and it is at odds with one of our goals: performance. The problem is that in order to close immediately we need to track the state of all sockets, e.g. adding them to a list when they are idle/keep-alive and removing them when they become active. This was unfortunately slow a few years back. I think we would have much more success implementing this in Node itself, in C++, but a check of whether it could be done here without significant overhead would be awesome.
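A minimal sketch of the kind of bookkeeping that comment describes (illustrative only, not Fastify's implementation; all names here are assumptions): sockets go into a set when idle and come out when they pick up a request, so a close can destroy the idle ones immediately.

```js
const http = require('node:http');

// Illustrative only: track which sockets are currently idle (keep-alive,
// no request in flight) so a close can destroy them immediately.
const idleSockets = new Set();

const server = http.createServer((req, res) => {
  idleSockets.delete(req.socket); // socket is now handling a request
  res.on('finish', () => {
    idleSockets.add(req.socket); // response done; socket is idle again
  });
  res.end('ok');
});

server.on('connection', (socket) => {
  idleSockets.add(socket);
  socket.on('close', () => idleSockets.delete(socket));
});

// On shutdown: stop accepting connections, then destroy the idle sockets
// right away instead of waiting for their keep-alive timeout.
function closeNow(done) {
  server.close(done);
  for (const socket of idleSockets) socket.destroy();
}
```

The per-request set operations on the hot path are exactly the overhead the comment is worried about.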
Something must be keeping track of these connections in order to reuse them.
I am really confused about what …
We push to the … (lines 369 to 381 in 93fe532); when it is called, it then executes …
Thanks. That makes the waters a little less murky. Unfortunately, adding a …
Nothing. Some more bytes pop up on the socket, those are parsed by Node's HTTP parser, and this is then translated to a …
That really complicates being able to clean up a process on shutdown.
As far as I recall, adding that tracking cost Hapi 30% of its throughput back in the day.
Resolved by #3619
I'm putting my example, which I've been using prior to this feature being available, here for discussion, since the new …

```js
for (const socket of socketsSet) {
  const serverResponse = socket._httpMessage;
  if (serverResponse?.headersSent === false) {
    // this socket is an active request, so give it some time to finish naturally
    serverResponse.setHeader('connection', 'close');
    continue;
  }
  // this is likely a keep-alive socket that is not part of an active request
  socket.destroy();
  socketsSet.delete(socket);
}
```

but then of course, I have to add some sanity to my graceful shutdown process so a client connection can't keep it from closing down forever...

```js
setTimeout(() => {
  throw new Error('gracefulStopAsync timed out');
}, 10000).unref();
```
Thank you. But two things: …
Thanks for the feedback. It seemed to work for my use case, but I wrote that a year ago and haven't looked at it since. I was mainly trying to start some discussion on how to use the newly supported feature alongside some kind of detection for in-progress requests, to give them time to close gracefully before being closed forcefully. Reviewing the PR, the new feature doesn't appear to be written in a way that can be hooked into, so I was trying to figure out what to do: use my method of tracking, or use the core feature and live with sometimes forcefully closing in-progress requests.
Please subscribe to nodejs/node#41578. If that feature gets implemented, we would deprecate this feature in Fastify and replace it with direct core support. The implementation in core should consider whether connections have active requests and allow them to finish before terminating the connection. As noted in the docs for this feature in Fastify, it is a very forceful close. Use with care.
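For context, the connection-management methods that Node core later shipped on `http.Server` (`closeIdleConnections()` and `closeAllConnections()`, available since Node 18.2) can be combined like this on a bare `http` server; whether and how Fastify wires these up is the open question above, so this sketch is an assumption:

```js
const http = require('node:http');

const server = http.createServer((req, res) => res.end('ok'));
server.listen(3000);

process.on('SIGINT', () => {
  server.close(() => process.exit(0)); // stop accepting new connections
  server.closeIdleConnections(); // drop idle keep-alive sockets immediately
  // After a grace period, forcefully drop whatever is still in flight.
  setTimeout(() => server.closeAllConnections(), 10000).unref();
});
```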
Given the simple service:
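A minimal stand-in for such a service (route, port, and the v3-style `listen` signature are assumptions here, not the issue's original snippet):

```js
const fastify = require('fastify')();

fastify.get('/', async () => ({ hello: 'world' }));

// Fastify v3-style listen; keep-alive sockets stay open by default.
fastify.listen(3000, (err) => {
  if (err) throw err;
});

process.on('SIGINT', () => {
  fastify.close(() => process.exit(0));
});
```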
We can start the server and then issue the following request to it:
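The issue's actual command is not shown; an illustrative equivalent in Node, using a keep-alive agent so the socket stays open after the response:

```js
const http = require('node:http');

// A keep-alive agent leaves the socket open after the response completes,
// which is what keeps the server from shutting down below.
const agent = new http.Agent({ keepAlive: true });

http.get({ port: 3000, path: '/', agent }, (res) => {
  res.resume(); // drain the body; the socket then sits idle, held open
});
```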
Subsequently, we send `SIGINT` to the server to indicate we want it to shut down. At this point, we will see that the server does not shut down before the keep-alive has completed. Our tests show that new connections are not possible, and reuse of the existing connection should be met with a 503: https://github.com/fastify/fastify/blob/93fe532986b96ae28a087b2ec385a8abb16cce55/test/close.test.js

[Video showing server wait: fastify-shutdown.mp4]
What we do not seem to have is a forceful termination of those open connections. I recognize that the spec says:

> …

But when we are shutting down, we usually don't want to do that.
I think our `fastify.close` method is directly mapped to the avvio `close` method. If we want to support forceful connection draining, would we do it here? (fastify/lib/server.js, lines 177 to 179 in 93fe532)
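A hypothetical sketch of what hooking forceful draining in at that spot could look like (the `openSockets` set and the `onClose` wiring are assumptions for illustration, not Fastify's actual code):

```js
// Track every socket the underlying Node server hands us.
const openSockets = new Set();
server.on('connection', (socket) => {
  openSockets.add(socket);
  socket.on('close', () => openSockets.delete(socket));
});

// During close, destroy whatever is still open instead of waiting
// for keep-alive timeouts to expire.
fastify.addHook('onClose', (instance, done) => {
  for (const socket of openSockets) {
    socket.destroy();
  }
  done();
});
```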