
Allow restarting pool #140

Merged · 2 commits merged into mperham:master from amarshall:restart on Jan 28, 2021
Conversation

@amarshall (Contributor)

The implementation of shutdown from #27 does not actually provide a way for the pool to re-create connections; it only renders the pool unusable. This PR implements that behavior. A new method is added so as not to change the existing behavior of shutdown.

@mperham (Owner) commented Dec 15, 2020

But why?

@mperham (Owner) commented Dec 15, 2020

What's wrong with ConnectionPool.new...?

@amarshall (Contributor, Author)

It requires updating every reference to the pool, wherever it is held, which in some cases is cumbersome.

@mperham (Owner) commented Dec 15, 2020

Ok, then I think I have a problem with the naming. Once something is shutdown, it should not be reused. Perhaps we need a reload method which is used instead of shutdown? Can you explain the higher-level use case you are trying to solve? Reloading connections when DNS changes, or similar?

@amarshall (Contributor, Author)

Once something is shutdown, it should not be reused. Perhaps we need a reload method which is used instead of shutdown?

“Restart” seems the obvious name for first shutting-down, then “starting” to put back into a usable state (which is what this method does). So while I disagree that “once shutdown it should not be reused”, I’m not here to bikeshed and so will defer to you on naming semantics.

Can you explain the higher-level use case you are trying to solve?

Similar to the referenced previous issue: all connections in the pool need to be recreated after forking.

@mperham (Owner) commented Dec 15, 2020

Got it, thanks for the context, now I understand what we are trying to achieve. I think I prefer reload. Can you implement it without adding a new method to TimedStack, i.e. shutdown(reload=false, &block)?

@amarshall (Contributor, Author)

Sure, just in TimedStack or also in ConnectionPool?

@mperham (Owner) commented Dec 15, 2020

I think the public API should be more explicit: ConnectionPool#reload with RDoc that explains the use case.

Also reword the RDoc of TimedStack#shutdown for parity.
@amarshall (Contributor, Author)

See updated commits.

@amarshall (Contributor, Author)

Checking in to see if there’s any further feedback as it’s been a bit, thanks.

@mperham (Owner) commented Jan 21, 2021

Out of curiosity, what type of connections are you pooling? I ask because many Ruby connection libraries automatically recycle the connection if they detect a PID change (i.e. after a fork).
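The PID-check pattern described here can be sketched as follows. This is a hypothetical illustration of the general technique; the class and method names are invented and do not come from any particular gem:

```ruby
# Hypothetical sketch: a connection that records which process created it and
# transparently reconnects when used from a forked child.
class PidAwareConnection
  attr_reader :connect_count

  def initialize
    @connect_count = 0
    connect!
  end

  # Called before each use: if Process.pid no longer matches the PID that
  # opened the connection, we are in a forked child and must reconnect.
  def ensure_owned
    connect! if Process.pid != @owner_pid
  end

  private

  def connect!
    @owner_pid = Process.pid
    @connect_count += 1
    # a real client would (re)open its socket here
  end
end
```

Libraries without this behavior (or that raise instead of reconnecting, as discussed below) are the ones that benefit from an explicit pool-wide reload after fork.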

@mperham (Owner) commented on the diff:

@@ -93,6 +95,7 @@ def shutdown(&block)
        shutdown_connections
      end
+     @shutdown_block = nil if reload

Should this line go inside the mutex sync block? Otherwise there appears to be a race with pop.

@amarshall (Contributor, Author)

You’re correct, if pop L70 is called in another thread between L96 & L98 then pop will incorrectly raise PoolShuttingDownError. I’ve amended the commit to move this inside the synchronize.
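The shape the change converged on can be sketched with a minimal stand-in. This is an illustration of the technique only, not the gem's actual source; `MiniTimedStack` and its internals are invented for the example:

```ruby
# Minimal stand-in for a timed stack, showing shutdown(reload:) with the
# re-arm (@shutdown_block = nil) performed inside the mutex, so a concurrent
# pop can never observe a half-shut-down stack.
class MiniTimedStack
  def initialize(&create_block)
    @create_block = create_block
    @que = []
    @mutex = Mutex.new
    @shutdown_block = nil
  end

  def push(conn)
    @mutex.synchronize do
      if @shutdown_block
        @shutdown_block.call(conn) # returned during shutdown: discard it
      else
        @que.push(conn)
      end
    end
  end

  def pop
    @mutex.synchronize do
      raise "pool shutting down" if @shutdown_block
      @que.pop || @create_block.call
    end
  end

  # reload: true discards existing connections via the block but leaves the
  # stack usable afterwards; reload: false keeps the old terminal behavior.
  def shutdown(reload: false, &block)
    @mutex.synchronize do
      @shutdown_block = block
      @que.each { |conn| @shutdown_block.call(conn) }
      @que.clear
      @shutdown_block = nil if reload # inside synchronize: no race with pop
    end
  end
end
```

With the reset outside `synchronize`, a `pop` scheduled between setting and clearing `@shutdown_block` could spuriously raise even though the caller asked for a reload, which is exactly the race noted above.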

@amarshall (Contributor, Author)

In this particular case we’re using the redis gem. It does detect a PID change, but raises rather than reconnecting (src).
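With a client that raises instead of reconnecting, the reload behavior this PR adds fits naturally into a post-fork hook. A sketch for a preforking server configuration, assuming a `REDIS_POOL` constant defined at boot (the constant name and the hook are illustrative, not from this PR):

```ruby
# e.g. in a Unicorn/Puma-style worker-boot hook (sketch, not verbatim config)
after_fork do
  # Discard every connection inherited from the parent process; the pool
  # stays usable and lazily creates fresh connections in the child.
  REDIS_POOL.reload { |conn| conn.close }
end
```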

@mperham (Owner) commented Jan 27, 2021

Please update the changelog and readme.

@amarshall (Contributor, Author)

Updated.

@mperham (Owner) commented Jan 28, 2021

Well done, thanks for your hard work and patience!

@mperham mperham merged commit 3eaba66 into mperham:master Jan 28, 2021
@amarshall amarshall deleted the restart branch January 28, 2021 19:07
@jbielick

This method is highly appreciated. Thanks for working on it!

casperisfine pushed a commit to casperisfine/net-http-persistent that referenced this pull request on Sep 26, 2022:

Ref: mperham/connection_pool#140

When forking a process, you need to close existing connections to avoid sharing them across processes. `shutdown` does that, but it also marks the pool as no longer usable.

`connection_pool` `2.2.4` introduced `#reload`, which discards existing connections but lets the pool be used again later. It's a much better fit for an `after_fork` callback.