zombocom/puma_worker_killer

memory calculations are "not useful"

canoeberry opened this issue · 2 comments

I am trying out puma worker killer and must be misunderstanding it.

I am running in a Docker container with 16 GB of RAM and 48 workers. PWK thinks 20 GB of RAM is being used, but most of that is shared: Puma preloads the app and then forks, so the workers share memory. `top` reports only 19% of the RAM in use.

However, as requests are made to the Rails app, the total size reported by PWK stays largely the same, within a GB, whereas `top` shows actual system memory climbing until Puma workers exit rather suddenly due to low memory. That would be fine with me, except the main Puma process doesn't seem to notice that a worker died in time to prevent it from being handed another request. Either that, or the worker is actually dying in the middle of a request, which I suppose is more likely. Either way, clients get errors, and that's unacceptable.

So, how can I configure PWK? The initial memory used according to `top` is 19%, while the memory size reported by PWK is 20 GB. By the time `top` hits 95%, PWK thinks it's maybe 21 GB.
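For reference, the knobs I've been experimenting with: a PWK setup along the lines of the README, placed in a `before_fork` block in `config/puma.rb` (the specific values here are just my guesses, not recommendations):

```ruby
# config/puma.rb
before_fork do
  require 'puma_worker_killer'

  PumaWorkerKiller.config do |config|
    config.ram           = 16 * 1024 # total RAM available to the container, in MB
    config.frequency     = 5         # how often (seconds) to check memory usage
    config.percent_usage = 0.90      # kill the largest worker past 90% of config.ram
  end
  PumaWorkerKiller.start
end
```

But since PWK's own measurement of "how much RAM is in use" barely moves, tuning `percent_usage` doesn't seem to help.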

So, the problem is that a process cannot accurately and efficiently calculate the amount of RAM it is using all on its own, right? VmRSS on Linux includes shared pages. As copy-on-write pages that started out shared are gradually written to and become private, the per-process RSS barely changes outwardly, even though total system memory use climbs.
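To illustrate the RSS-vs-shared-pages point: Linux exposes both `Rss` and `Pss` (proportional set size, which divides shared pages among the processes sharing them) in `/proc/<pid>/smaps_rollup`. A minimal sketch of reading them; `memory_kb` and the sample text are mine, not anything from PWK:

```ruby
# Parse a field like "Rss:   420000 kB" out of /proc/<pid>/smaps_rollup text.
def memory_kb(smaps_text, field)
  line = smaps_text.lines.find { |l| l.start_with?("#{field}:") }
  line ? line.split[1].to_i : nil
end

# Fabricated sample illustrating a forked Puma worker: Rss counts every
# shared copy-on-write page in full, Pss only this worker's share of them.
sample = <<~SMAPS
  Rss:              420000 kB
  Pss:               90000 kB
  Shared_Clean:     300000 kB
  Private_Dirty:     60000 kB
SMAPS

puts "Rss: #{memory_kb(sample, 'Rss')} kB"
puts "Pss: #{memory_kb(sample, 'Pss')} kB"

# On a real Linux box you would read the live file instead:
# text = File.read("/proc/self/smaps_rollup")
```

Summing Rss across 48 forked workers massively overcounts (hence the "20 GB" in a 16 GB container), while Pss sums to something close to the real total; but as shared pages go private, Rss stays flat while real usage climbs.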

[Now that I've written this, I am not sure what my point is, but since I accidentally created this issue, I feel I should at least explain myself.]

From inside a container you need to use the "rolling restarts" feature, as containers do not expose the "correct" amount of memory being used. Details are here: https://github.com/zombocom/puma_worker_killer?tab=readme-ov-file#turn-on-rolling-restarts---heroku-mode.
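Per that section of the README, rolling restarts are enabled instead of (not alongside) the memory-based reaping, roughly like this (the 12-hour interval is just an example value):

```ruby
# config/puma.rb
before_fork do
  require 'puma_worker_killer'

  # Restart workers on a schedule instead of watching memory,
  # since memory readings inside a container are unreliable.
  PumaWorkerKiller.enable_rolling_restart(12 * 3600) # every 12 hours, in seconds
end
```

This sidesteps the measurement problem entirely: workers are recycled before copy-on-write divergence can exhaust the container's memory.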