
In fork_worker mode, worker_culling_strategy "oldest" shouldn't cull worker 0 #2794

Merged

Conversation

johnnyshields
Contributor

PR #2773 introduced worker_culling_strategy = :oldest which was designed to improve memory efficiency by culling "leaky" older workers first.

In :fork_worker mode, however, this option culls worker 0 (which tends to be the oldest), which runs contrary to the goal of improving memory efficiency: an entirely new worker 0 process is spawned, and the other children (n >= 1) will no longer share any CoW memory with it.

Instead, it's better to exclude worker 0 from culling in :fork_worker mode.
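The proposed behavior can be sketched as follows. This is a minimal illustration, not Puma's actual implementation; the method name `workers_to_cull` and the worker hash fields are hypothetical.

```ruby
# Hypothetical sketch of the culling selection this PR proposes.
# Each worker is represented here as a hash with :index and :started_at.
def workers_to_cull(workers, cull_count, fork_worker:)
  # In :fork_worker mode, worker 0 is the process the other workers fork
  # from, so culling it would spawn a fresh worker 0 that shares no CoW
  # memory with the existing children. Exclude it from the candidates.
  candidates = fork_worker ? workers.reject { |w| w[:index] == 0 } : workers

  # "oldest" strategy: cull the longest-running workers first.
  candidates.sort_by { |w| w[:started_at] }.first(cull_count)
end
```

With `fork_worker: true`, worker 0 is never selected even when it is the oldest; with `fork_worker: false`, the plain "oldest first" ordering applies.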

@nateberkopec nateberkopec added feature waiting-for-review Waiting on review from anyone labels Jan 9, 2022
@nateberkopec
Member

Interesting. I guess that makes sense - in fork_worker mode, worker 0 is sort of like our master process. Makes sense to me and prevents an accidental "footgun".

@nateberkopec nateberkopec merged commit 5f255fc into puma:master Jan 10, 2022
@johnnyshields johnnyshields deleted the fork-worker-shouldnt-cull-worker-zero branch January 10, 2022 16:32
JuanitoFatas pushed a commit to JuanitoFatas/puma that referenced this pull request Sep 9, 2022
…worker 0 (puma#2794)

* When using fork_worker mode, worker_culling_strategy "oldest" shouldn't cull worker 0.

* Consider case where worker0 isn't oldest worker

* Reduce code complexity

* Add code comment

Co-authored-by: shields <shields@tablecheck.com>