
Log warning when max worker count is 1 #2534

Closed
nateberkopec opened this issue Jan 24, 2021 · 9 comments · Fixed by #2565
@nateberkopec
Member

I'm struggling to think of a good reason to use cluster w/1 worker vs single mode.
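For context, the two configurations being compared look roughly like this in a `puma.rb` (values here are illustrative, not recommendations):

```ruby
# puma.rb — cluster mode with a single worker: a control (master)
# process forks exactly one worker child.
workers 1
threads 5, 5

# puma.rb — single mode: omit `workers` (or set it to 0) and the one
# process serves requests directly on its own thread pool.
# workers 0
# threads 5, 5
```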

@MSP-Greg
Member

MSP-Greg commented Jan 25, 2021

Yeah, that question has kept me up a few nights...

Maybe the real question is: what benefit would there be to using one worker instead of more? If your platform has fork, two workers (or more) allow a phased restart, etc.

EDIT: There may be some edge cases where a cluster with one worker might be better than single mode, but I think a log warning is probably a good idea.

@vizcay
Contributor

vizcay commented Jan 25, 2021

Hi @nateberkopec I've asked that specific question 4 years ago: #1426

@nateberkopec
Member Author

@vizcay indeed.

Will leave open for a while to see if someone can provide a good reason.

@cjlarose
Member

FWIW I've found this "feature" to be useful in testing clusters. The bundler preservation tests and tests around gem changes in phased restarts both use single-worker clusters.

So we want to keep the feature, but logging a warning message seems appropriate.

@cjlarose
Member

cjlarose commented Jan 25, 2021

One potential use case I can think of is that a single-worker cluster might be more resilient to full application crashes. In a cluster, if the worker process exits because of an unexpected failure (segfault or something), the cluster will restart the worker.

In single-mode, it's all one process, so the whole process would die. Of course, users should be using a process manager anyway that'll restart the single-mode process. What users lose here is that the sockets would be unbound-and-rebound so they could lose new connections when the failover happens. In the single-worker cluster situation, the sockets would remain listening.

Still, printing a warning in this case probably wouldn't hurt users if they really wanted a single-worker cluster. I see it as a kind of "do this only if you know what you're doing" kind of warning. For most folks, it's not what they want.
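The worker-restart behavior described above, which cluster mode's master process provides, can be sketched with a tiny supervision loop. This is an illustration of the idea only, not Puma's actual implementation:

```ruby
# A minimal sketch of supervision: the parent forks a "worker" and
# re-forks it when it exits, the way a cluster master keeps its worker
# count topped up. (Illustrative only; not Puma internals.)
restarts = 0
2.times do
  pid = Process.fork do
    exit!(1) # simulate the worker crashing unexpectedly
  end
  Process.wait(pid)
  restarts += 1 # the master would fork a replacement here
end
puts "worker exited and was replaced #{restarts} times"
```

Because the parent process (and, in real Puma, its bound listening sockets) stays alive across the child's crash, the restart happens without rebinding.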

@vizcay
Contributor

vizcay commented Jan 27, 2021

What users lose here is that the sockets would be unbound-and-rebound so they could lose new connections when the failover happens.

My understanding is that by binding a systemd socket to the service, the socket will queue requests while the service restarts.
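A rough sketch of that systemd socket-activation setup; unit names, paths, and the port are hypothetical, so consult the systemd and Puma docs for the real details:

```ini
# puma.socket — systemd holds the listening socket and queues
# incoming connections while the service is down or restarting.
[Socket]
ListenStream=0.0.0.0:9292

[Install]
WantedBy=sockets.target

# puma.service — Puma inherits the already-bound socket from systemd.
[Service]
ExecStart=/usr/local/bin/bundle exec puma -C config/puma.rb
Restart=always

[Install]
WantedBy=multi-user.target
```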

For most folks, it's not what they want.

Exactly, I was surprised by this behaviour the first time.

@cjlarose
Member

Is my understanding that by using a systemd socket and service binded the socket will queue requests while the service restarts.

That's true. If you happen to be using systemd's inherited sockets, you're good to go. But any other process manager might not have the same feature.

@CGA1123
Contributor

CGA1123 commented Feb 7, 2021

I think this is well worth a warning, at least. I recently noticed that we were running one-worker clusters in production for the majority of our applications.

Switching to single mode saved ~15% RAM across our applications, letting us increase the thread count or bring us under the memory limits of our servers and avoid swapping! We switched over in October last year and haven't noticed any adverse effects on our setup.

There are graphs and more details in a write-up I did about this.

nateberkopec pushed a commit that referenced this issue Mar 9, 2021
* Print warning when running one-worker cluster

Running Puma in cluster mode with a single worker is likely a
misconfiguration in most scenarios.

Cluster mode has the overhead of running an additional 'control' process
to manage the cluster. With only a single worker, that overhead is
likely not worth paying versus running in single mode with additional
threads instead.

There are some scenarios where running cluster mode with a single worker
may still be warranted and valid; see the linked issue for details.

From experience at work, we were able to migrate our Rails applications
from a single-worker cluster to single mode and saw a reduction in RAM
usage of around 15%.

Closes #2534

* Remove link to issue

* Add #silence_single_worker_warning option

* Test single_worker_warning
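For anyone who does want a one-worker cluster intentionally, the `silence_single_worker_warning` DSL option added by this commit can be set in `puma.rb`; a sketch:

```ruby
# puma.rb — keep a one-worker cluster on purpose and suppress the warning.
workers 1
silence_single_worker_warning
```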
JuanitoFatas pushed a commit to JuanitoFatas/puma that referenced this issue Sep 9, 2022
(same commit message as above)
@Exterm1nate

And also, embedded Sidekiq (v7.0) currently requires configuration inside on_worker_boot and on_worker_shutdown, so the only way to use it without multiple Puma workers (e.g. in development) is to set the worker count to 1.
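A sketch of that setup, assuming Sidekiq 7's embedded API (`Sidekiq.configure_embed`); treat the queue names and concurrency as illustrative and verify the API shape against your Sidekiq version:

```ruby
# puma.rb — run embedded Sidekiq inside the (single) Puma worker.
workers 1 # the on_worker_* hooks below only fire in cluster mode

sidekiq = nil

on_worker_boot do
  sidekiq = Sidekiq.configure_embed do |config|
    config.queues = %w[default]
    config.concurrency = 2
  end
  sidekiq.run
end

on_worker_shutdown do
  sidekiq&.stop
end
```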
