memory calculations are "not useful" #106

Open
canoeberry opened this issue Oct 24, 2023 · 1 comment

Comments

@canoeberry

No description provided.

@canoeberry
Author

I am trying out puma_worker_killer (PWK) and must be misunderstanding it.

I am running in a Docker container with 16 GB of RAM and 48 workers. PWK thinks about 20 GB of RAM is being used, but that memory is largely shared, because Puma preloads the app and then forks, so the workers share most of their pages. `top` reports only 19% of the RAM in use.

However, as requests are made to the Rails app, the total size reported by PWK stays largely the same, within a GB, whereas `top` shows actual system memory climbing until Puma workers exit rather suddenly due to low memory. That would be fine with me, except the Puma master doesn't seem to notice that the worker died in time to prevent a request from being handed to it. Either that, or the worker is actually dying in the middle of a request, which I suppose is more likely. Either way, clients get errors, and that's unacceptable.

So, how can I configure PWK? Initially, `top` says 19% of memory is in use while PWK reports 20 GB. By the time `top` hits 95%, PWK thinks it's maybe 21 GB.
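
For reference, here is roughly how I'm wiring it up; the option names below follow the puma_worker_killer README as I understand it, and the values are just what I'm experimenting with, so treat this as a sketch rather than a known-good setup:

```ruby
# config/puma.rb (excerpt)
before_fork do
  require 'puma_worker_killer'

  PumaWorkerKiller.config do |config|
    config.ram           = 16 * 1024 # MB; PWK only knows the limit you tell it here
    config.frequency     = 10        # seconds between memory checks
    config.percent_usage = 0.90      # cull the largest worker once total usage exceeds 90% of ram
    config.reaper_status_logs = true # log what the reaper measures on each check
  end
  PumaWorkerKiller.start
end
```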

So, the problem is that a process cannot accurately and efficiently calculate the amount of RAM it is using all on its own, right? The VmRSS figure from Linux counts shared pages in full. As copy-on-write gradually turns that shared data into private, unshared copies, the per-process RSS barely changes outwardly, even though total system memory keeps climbing.
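
For what it's worth, Linux does expose a number that accounts for sharing: `Pss` in `/proc/<pid>/smaps_rollup` (kernel 4.14+) charges a process only its proportional share of shared pages. Something like the snippet below, which is not part of the gem, just a hypothetical helper, shows the gap between the two figures:

```ruby
# Hypothetical helper: compare Rss (counts shared pages in full) with
# Pss (counts only this process's proportional share of shared pages).
# Needs Linux 4.14+ for /proc/<pid>/smaps_rollup.
def memory_mb(pid = Process.pid)
  stats = {}
  File.foreach("/proc/#{pid}/smaps_rollup") do |line|
    # Lines look like "Rss:             123456 kB"
    stats[$1] = $2.to_i / 1024.0 if line =~ /\A(\w+):\s+(\d+) kB/
  end
  { rss_mb: stats["Rss"], pss_mb: stats["Pss"] }
end

# In a freshly forked, preloaded worker, pss_mb sits far below rss_mb;
# the gap closes as copy-on-write turns shared pages into private ones.
puts memory_mb.inspect
```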

[Now that I've written this, I'm not sure what my point is, but since I accidentally created this issue, I feel I should at least explain myself.]
