ixti/sidekiq-throttled

Allow throttle queues

ixti opened this issue · 5 comments

ixti commented

It's possible to throttle a queue by using a shared throttling key, but that causes throttled jobs to be pushed back to the end of the queue. It would be nice to have real queue throttling, where a throttled job is pushed back to the head of the queue and the queue is paused from fetching for some time.

See: #122

Can you give an example of throttling a queue by a shared key? The docs only seem to mention a key_suffix?

My scenario: I have multiple job classes that run on the expensive queue. I only want one job at a time to run on the expensive queue.

@joevandyk below is what we're doing right now for throttling multiple jobs with a shared key (not necessarily throttling the entire queue, but if only jobs that have this throttle are put in the queue, then the queue is essentially throttled)

In a Rails initializer file:

# For Users::Engagement::SetSendsJob, SetOpensJob, and SetClicksJob
Sidekiq::Throttled::Registry.add(:users_engagement_set_etl,
                                 # Only allow 1 job per tenant at a time, for up to 20 tenants at a time
                                 concurrency: [
                                   { limit: 1, key_suffix: ->(args) { args['tenant'] } },
                                   { limit: 20 },
                                 ])

Then, in the individual jobs, we add:

sidekiq_throttle_as :users_engagement_set_etl
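Put together, a job using the registered throttle might look like the sketch below. The queue name (`:expensive`, from the scenario above) and the `Sidekiq::Job` / `Sidekiq::Throttled::Job` module names are assumptions that depend on your Sidekiq and sidekiq-throttled versions (older versions use `Sidekiq::Worker` / `Sidekiq::Throttled::Worker`):

```ruby
# A sketch of one of the jobs sharing the registered throttle.
class Users::Engagement::SetSendsJob
  include Sidekiq::Job            # Sidekiq::Worker on older Sidekiq versions
  include Sidekiq::Throttled::Job # assumption: module name varies by gem version

  sidekiq_options queue: :expensive # hypothetical queue name

  # Reuse the throttle registered in the initializer; all jobs that
  # declare this name share the same concurrency limits.
  sidekiq_throttle_as :users_engagement_set_etl

  def perform(args)
    # ...
  end
end
```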

I am testing this as a migration path from sidekiq-limit_fetch.
I followed @mnovelo's sample, registered multiple concurrency throttles based on the queues, and applied them to the relevant jobs.

Having concurrency-limited jobs in different queues causes an unexpected effect.
Let's say we have 3 queues, high/normal/low, and 10 threads in this example.

If all jobs enqueued in the high queue share a concurrency limit of 3, Sidekiq runs 3 as expected, but does not fill the rest of the threads with the remaining low jobs.

This is because of how Sidekiq queues work (https://github.com/sidekiq/sidekiq/wiki/Advanced-Options#queues), but it needs to be considered for queue-based throttling.
Queue weights might help adjust this.
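For reference, weighted queue fetching is configured in Sidekiq's config file; a sketch, assuming the three queues from the example above:

```yaml
# config/sidekiq.yml -- with weights, Sidekiq samples queues in
# proportion to their weight instead of strictly draining high first,
# so low-queue jobs can still reach idle threads.
:queues:
  - [high, 3]
  - [normal, 2]
  - [low, 1]
```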

Oh, that's interesting @jcsrb. We strictly use weighted queues, so we've not run into that.

May I jump in?

I am looking for a way to throttle a queue without any keys. The reason is that I have an API provider I consume via various endpoints, with a limit of 3000 requests per minute regardless of which endpoint I hit. So I thought if I could have a queue for that provider and set a limit of, say, 2500 reqs/minute, I'd be pretty much done.

Would this very issue add that kind of functionality, or would I need to bring that up in a different one? Or is this already supported and my brain just wasn't bright enough to get that out of the docs?

Thanks so much for this awesome gem.
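For what it's worth, a per-minute rate cap like the one described above maps to the gem's threshold strategy rather than a concurrency one. A sketch, assuming a dedicated `provider_api` queue that only these jobs use (queue name and module names are assumptions, and as noted earlier in the thread this throttles the jobs, not the queue itself):

```ruby
# A sketch of a rate-limited job for the external API scenario.
class ProviderApiRequestJob # hypothetical job name
  include Sidekiq::Job
  include Sidekiq::Throttled::Job # assumption: module name varies by gem version

  sidekiq_options queue: :provider_api # hypothetical dedicated queue

  # Threshold strategy: at most 2500 executions per 60-second window,
  # leaving headroom under the provider's 3000 req/min limit.
  sidekiq_throttle threshold: { limit: 2500, period: 60 }

  def perform(endpoint, params)
    # ... call the provider's API ...
  end
end
```

If every job class that talks to the provider declares the same throttle via a shared registry name (as in the `sidekiq_throttle_as` example above), the combined request rate stays under the cap across all endpoints.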