ixti/sidekiq-throttled

Multiple workers sharing same thresholds

popcorn opened this issue · 3 comments

Hey, thanks for sidekiq-throttled, I love it!

My question is: Can I set a threshold that will be used by multiple workers?

Example:
I am using a third-party API that allows me to send 100 requests per minute, and I have three workers, all of which send requests to this API.
If I set a threshold of 100 requests per minute on each of these workers, then under higher load two or three of them may each try to execute 100 requests per minute and I will hit the API rate limit.

If I set the threshold of every worker to 33 requests per minute, I will be fine even if all three of them come under higher load. However, if only one of them is under higher load, it will process only 33 requests per minute even though I have 67 requests per minute to spare on the API.

Is there a way to solve this problem?

ixti commented

Yeah, you can "register" a shared bucket and re-use it in multiple workers:

#44 (comment)
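
For the original example (100 requests per minute shared by all three workers), it boils down to something like this; the :third_party_api name and FirstApiWorker class are only illustrative, and 1.minute assumes ActiveSupport duration helpers:

Sidekiq::Throttled::Registry.add(:third_party_api, {
  threshold: {
    limit: 100,      # shared budget for every worker using this strategy
    period: 1.minute
  }
})

class FirstApiWorker
  include Sidekiq::Worker
  include Sidekiq::Throttled::Worker

  # every worker that declares the same registered name shares one counter
  sidekiq_throttle_as :third_party_api

  def perform(*args)
    # call the third-party API here
  end
end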

@ixti maybe you can help me with two questions.

I have the same task: I need to throttle my requests to an external API to no more than 30 requests per second.

To check the proposed solution, I created a simple demo:

# Shared strategy: at most 1 job per second per chat_id,
# used by every worker that declares `sidekiq_throttle_as :telegram_api`.
Sidekiq::Throttled::Registry.add(:telegram_api, {
  threshold: {
    limit: 1,
    period: 1.second,
    key_suffix: ->(chat_id) { chat_id }
  }
})

class TestMeJob
  include Sidekiq::Worker
  include Sidekiq::Throttled::Worker

  sidekiq_throttle_as :telegram_api

  def perform(chat_id)
    ap "tm1_#{chat_id}"
  end
end

class TestMe2Job
  include Sidekiq::Worker
  include Sidekiq::Throttled::Worker

  sidekiq_throttle_as :telegram_api

  def perform(chat_id)
    ap "tm2_#{chat_id}"
  end
end

And then:

100.times { |i| TestMeJob.perform_async(rand(2)); TestMe2Job.perform_async(rand(2)) }

In the Sidekiq logs, I see the following:

2021-01-07T10:05:49.572Z pid=95312 tid=1stw class=TestMeJob jid=35c8b92fa78a20b6ce8a8eb3 INFO: start
2021-01-07T10:05:49.573Z pid=95312 tid=1swc class=TestMeJob jid=1bb3bb4280bac32c0e7c7229 INFO: start
"tm1_0"
"tm1_1"
2021-01-07T10:05:49.575Z pid=95312 tid=1swc class=TestMeJob jid=1bb3bb4280bac32c0e7c7229 elapsed=0.002 INFO: done
2021-01-07T10:05:49.575Z pid=95312 tid=1stw class=TestMeJob jid=35c8b92fa78a20b6ce8a8eb3 elapsed=0.002 INFO: done

2021-01-07T10:05:51.580Z pid=95312 tid=1swc class=TestMe2Job jid=4ade8ec0b3310fac6c8139c2 INFO: start
"tm2_0"
2021-01-07T10:05:51.583Z pid=95312 tid=1swc class=TestMe2Job jid=4ade8ec0b3310fac6c8139c2 elapsed=0.002 INFO: done

2021-01-07T10:05:53.587Z pid=95312 tid=1swc class=TestMeJob jid=1ab21069173279d8850a6369 INFO: start
2021-01-07T10:05:53.587Z pid=95312 tid=1stw class=TestMeJob jid=d5b612a32bd152eb635b7d97 INFO: start
"tm1_1"
2021-01-07T10:05:53.589Z pid=95312 tid=1stw class=TestMeJob jid=d5b612a32bd152eb635b7d97 elapsed=0.002 INFO: done
"tm1_0"
2021-01-07T10:05:53.589Z pid=95312 tid=1swc class=TestMeJob jid=1ab21069173279d8850a6369 elapsed=0.002 INFO: done

2021-01-07T10:05:55.595Z pid=95312 tid=1swc class=TestMe2Job jid=4177aefa52b4bfcd53b2557d INFO: start
"tm2_0"
2021-01-07T10:05:55.597Z pid=95312 tid=1swc class=TestMe2Job jid=4177aefa52b4bfcd53b2557d elapsed=0.002 INFO: done
2021-01-07T10:05:55.603Z pid=95312 tid=1swc class=TestMe2Job jid=0087e523886b521395608feb INFO: start
"tm2_1"
2021-01-07T10:05:55.604Z pid=95312 tid=1swc class=TestMe2Job jid=0087e523886b521395608feb elapsed=0.002 INFO: done

2021-01-07T10:05:57.609Z pid=95312 tid=1stw class=TestMe2Job jid=30080410bda2864586e4332b INFO: start
"tm2_0"
2021-01-07T10:05:57.611Z pid=95312 tid=1stw class=TestMe2Job jid=30080410bda2864586e4332b elapsed=0.002 INFO: done

2021-01-07T10:05:59.617Z pid=95312 tid=1swc class=TestMe2Job jid=7dd1fad843eff5dbd54bad16 INFO: start
"tm2_1"
2021-01-07T10:05:59.618Z pid=95312 tid=1swc class=TestMe2Job jid=7dd1fad843eff5dbd54bad16 elapsed=0.002 INFO: done

2021-01-07T10:06:01.622Z pid=95312 tid=1stw class=TestMeJob jid=27b9406641b291d0eb44e972 INFO: start
"tm1_1"
2021-01-07T10:06:01.624Z pid=95312 tid=1stw class=TestMeJob jid=27b9406641b291d0eb44e972 elapsed=0.002 INFO: done
  1. For some reason, the jobs do not run every second.
  2. Periodically, a "tick" runs only one job, even though there are still jobs in the queue with a different chat_id.

My goal is to process these jobs as fast as possible, so I am trying to find a solution to these questions. Can I somehow change this behaviour?
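
For reference, for the real task (30 requests per second shared across all chats) I assume the registration would drop the key_suffix and look roughly like this:

Sidekiq::Throttled::Registry.add(:telegram_api, {
  threshold: {
    limit: 30,        # at most 30 jobs per second in total
    period: 1.second  # no key_suffix, so the counter is shared by all chat_ids
  }
})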

ixti commented

Should be fixed in v1.0.0.alpha - I have removed cooldowns.
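
(To pick up a pre-release you have to request it explicitly, e.g. in the Gemfile; adjust the constraint to whatever pre-release is current:)

gem "sidekiq-throttled", ">= 1.0.0.alpha"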