kzk/unicorn-worker-killer

Gem is not working in Rails 4.1.2?

Opened this issue · 15 comments

I am running Rails 4.1.2 and Ruby 2.1.2, and I see a NoMethodError for the oom method. I can submit more details if you confirm whether it's working for you. Thanks.

I'd accept a pull request for this, but I probably won't be able to look into the problem myself for a while. @kzk Have you seen this?

@joshidhruv I'm running Rails 4.1.4 and Ruby 2.1.2 without any problems. Did you require the gem after the unicorn gem?

@jvanbaarsen Yes, I did. I also used the Heroku API to do that. But my real problem is that I am getting memory-exceeded errors on Heroku, and if I shut down my worker, my Sidekiq background jobs end with it. I have a memory leak in the app, and it's very hard to track down. Thanks for the help.

@joshidhruv Ah ok, so the problem is not that you get a NoMethodError but that your processes are crashing? In that case, how many dynos are you running, how many unicorn processes are you spawning, and what is the baseline memory footprint of a normally running process for your app?

I am sorry, I was not clear. I did get the NoMethodError, and I am not sure why. I stopped using the gem because I didn't find any other solution.

Now for my app: I have 2 (1x) dynos, so 512 MB each, plus 1 (2x) worker and 1 (small) Redis instance. I have 3 worker processes in config/unicorn.rb and a concurrency of 1 in config/sidekiq.rb. I am not sure if you know much about Sidekiq, but in short it's a background job scheduler/worker.

Ok, do you have an idea of how much memory a single process is using? When a single process uses more than about 170 MB, your dyno will run out of memory. Did you try lowering the number of unicorn processes?
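For a rough sense of where that 170 MB figure comes from, here is a back-of-the-envelope sketch (assuming a 512 MB 1x dyno and 3 unicorn workers, as described above):

```ruby
# Back-of-the-envelope per-worker budget (assumption: 512 MB 1x dyno).
dyno_ram_mb     = 512
unicorn_workers = 3

per_worker_mb = dyno_ram_mb / unicorn_workers
puts "Each worker can use at most ~#{per_worker_mb} MB"  # ~170 MB
```

In practice the usable budget is even smaller, since the master process and any other resident memory also count against the dyno's total.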

When I run it locally, Activity Monitor shows ~600 MB of RAM usage, and I am not sure what you mean by basic footprint. In short, my app is a web scraper that will look for a word across something like 10,000 websites.

I am not using a dyno to do this work now; I bought a worker that does the job. I don't see the dyno having memory issues in the logs, but the worker is complaining. The worker has 1 GB of RAM and it goes out of bounds.

The Heroku documentation suggests having more than 2 worker processes, and I did try with 2, but it didn't help.

@joshidhruv Hm ok, afraid I can't help you :(

No worries, I am trying to move to Delayed Job to see how that goes. Thank you @jvanbaarsen for taking part.

kzk commented

@joshidhruv Can you paste the exact exception you got? It's hard for us to guess your situation.

Hey @kzk, this is what my config.ru file looks like:

require 'unicorn/worker_killer'

oom_min = (800) * (1024**2)
oom_max = (900) * (1024**2)

# Max memory size (RSS) per worker
use Unicorn::WorkerKiller::Oom, oom_min, oom_max

require ::File.expand_path('../config/environment', __FILE__)
run Rails.application
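For context on what those two numbers do: per the gem's README, each worker picks a random threshold between the min and max so that all workers don't restart at the same moment. A simplified illustration of that idea (this is not the gem's actual code):

```ruby
# Simplified illustration of the Oom check (not the gem's real implementation).
oom_min = 800 * (1024**2)  # 800 MB in bytes
oom_max = 900 * (1024**2)  # 900 MB in bytes

# Each worker draws its own limit somewhere between min and max,
# then restarts itself once its RSS grows past that limit.
limit = oom_min + rand(oom_max - oom_min + 1)
puts "This worker would restart above #{limit / (1024**2)} MB"
```

Note that with oom_min of 800 MB on a 512 MB dyno, the worker would already be deep into swap (or killed by Heroku) before the threshold is ever reached.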

It's set up on Heroku, and I have only 2 dynos and 1 worker. In the unicorn.rb file I have set this:

worker_processes 3
timeout 30
preload_app true

And since I updated the Gemfile it does not show me any error, but it's not working: I can see my worker's memory going up to 1 GB.

mool commented

I had the same problem with Ruby 2.1.2, and to make it work I had to add a `require 'unicorn'` before the `require 'unicorn/worker_killer'`. Here is my config:

if defined?(Unicorn)
  require 'unicorn'
  require 'unicorn/worker_killer'

  oom_min = (300) * (1024**2)
  oom_max = (320) * (1024**2)

  use Unicorn::WorkerKiller::Oom, oom_min, oom_max
end

require ::File.expand_path('../config/environment', __FILE__)
run Rails.application

@joshidhruv How were you able to see what your unicorn worker was consuming?

I am not on Heroku, so I SSH into the instance and run ps aux --sort -rss. That shows an RSS column with values in kilobytes, but I am not sure it's entirely accurate.
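If you want to check from inside the Ruby process instead, here is a minimal sketch (assumes a POSIX system where `ps` is available; the helper name is made up for illustration):

```ruby
# Hypothetical helper: read this process's resident set size via `ps`.
# `ps -o rss= -p PID` prints RSS in kilobytes for the given pid.
def current_rss_bytes
  `ps -o rss= -p #{Process.pid}`.strip.to_i * 1024
end

puts "RSS: #{current_rss_bytes / (1024 * 1024)} MB"
```

You could log this from a unicorn worker periodically to watch its memory grow between restarts.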

Same here; I will try your fix, @mool.

Is this still an issue or can we close this issue?