Cannot set memory limits lower than 450MB
ineu opened this issue · 3 comments
I have a pod consuming 115MB of RAM. I tried to set limits of 128MB, 256MB, etc., but the lowest one that worked was 450MB. It looks like the runner itself requires this much RAM, so the pod gets killed by the OOM killer before the application starts. I see the following in dmesg:
[1655181.702032] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
[1655181.702176] [23241] 2000 23241 4497 69 14 3 0 984 bash
[1655181.702177] [23254] 2000 23254 4497 62 15 3 0 984 bash
[1655181.702179] [23255] 2000 23255 140260 102204 211 5 0 984 objstorage
[1655181.702185] Memory cgroup out of memory: Kill process 23255 (objstorage) score 1980 or sacrifice child
[1655181.702513] Killed process 23255 (objstorage) total-vm:561040kB, anon-rss:408816kB, file-rss:0kB
I'm not sure what objstorage is, but it is pretty greedy.
Update: Docker-based pods are fine. I just set a limit of 16MB for one of them and it works well.
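For reference, this is roughly what those limits amount to at the Kubernetes level (a minimal sketch; the deployment name objstorage-web is made up for illustration, and in Workflow the limit would normally be set through the deis CLI rather than kubectl directly):

# Hypothetical example: the deployment name is not from this issue.
# Apply a 128Mi memory limit to the container, as in the failing attempts above:
kubectl set resources deployment/objstorage-web --limits=memory=128Mi

# Check what the kubelet will enforce:
kubectl get deployment/objstorage-web \
  -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}'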
It's not the fault of slugrunner itself, but of tar unpacking the slug before the app runs.
On startup, slugrunner downloads the "slug" (you can see its size at the end of the slugbuilder log, something like -----> Compiled slug size is 241M), unpacks it to /app, changes into that directory, and runs the corresponding command from the Procfile or buildpack.
So the whole process looks like this: slugrunner starts, tar eats all the memory, the OOM killer kicks in, slugrunner restarts, tar eats all the memory again, and so on. You get the idea.
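For illustration, a simplified sketch of that start sequence (not the actual slugrunner source; SLUG_URL and the Procfile command are placeholders):

#!/usr/bin/env bash
# Simplified sketch of a slugrunner-style start sequence (not the real script).
# SLUG_URL stands in for wherever the slug is fetched from; the Procfile entry is hypothetical.

mkdir -p /app
# Download and unpack the compiled slug into /app. This is the step where tar
# (and the download helper) competes with the app for the pod's memory limit.
curl -fsSL "$SLUG_URL" | tar -xzf - -C /app

cd /app
# Finally exec the process type's command from the Procfile (e.g. "web: ./myapp").
exec bash -c "$(grep '^web:' Procfile | cut -d: -f2-)"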
We should find a way to limit tar's memory usage to about 2/3 of .resourceFieldRef.resource.limits.memory.
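One possible shape for that (just a sketch, not a settled fix; MEMORY_LIMIT_BYTES is a hypothetical variable name, assumed to be injected via the Downward API with valueFrom.resourceFieldRef and resource: limits.memory, and ulimit is only one blunt way to cap the extraction step):

# Sketch only: assumes the pod spec exposes the container's memory limit in bytes
# as MEMORY_LIMIT_BYTES via the Downward API (resourceFieldRef: limits.memory).
# ulimit -v takes KiB, so convert; give the unpack step roughly 2/3 of the limit.
budget_kib=$(( MEMORY_LIMIT_BYTES * 2 / 3 / 1024 ))
( ulimit -v "$budget_kib"; curl -fsSL "$SLUG_URL" | tar -xzf - -C /app )

At least in principle, an oversized slug would then make the unpack step fail with an allocation error instead of tripping the cgroup OOM killer for the whole pod.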
This issue was moved to teamhephy/slugrunner#3