que-rb/que

Empty job buffer not refilled when any worker priority > highest job priority

oeoeaio opened this issue · 16 comments

Expected behaviour

If workers of an appropriate priority are running, the locker/poller should push available jobs to them as soon as they run out of work. This happens most of the time.

Actual behaviour

With particular worker / job priority combinations, polling seems to only occur (or occur successfully) on the polling interval rather than when workers run out of jobs.

Steps to reproduce

  • Enqueue lots of no-op jobs with priority of 99
  • Run que with the following config:

--worker-count 2 --worker-priorities 98,99 --poll-interval 5 --maximum-buffer-size 0

  • Two jobs will be processed every 5 seconds

--worker-count 2 --worker-priorities 98,99 --poll-interval 5 --maximum-buffer-size 5

  • Six jobs will be processed every 5 seconds
  • Assume buffer_size + 1 is some quirk of how jobs are fed to workers?

--worker-count 2 --worker-priorities 99,99 --poll-interval 5 --maximum-buffer-size 0

  • Jobs will be processed continuously, with the poller pushing new jobs to workers as soon as they are free

--worker-count 2 --worker-priorities 99,150 --poll-interval 5 --maximum-buffer-size 0

  • Same as above, jobs will be processed continuously, with the poller pushing new jobs to workers as soon as they are free

I haven't had a chance to dig into what is causing this yet, but I wanted to document it here so I don't forget.

I assume you meant --worker-count; but that's actually redundant when providing --worker-priorities.

And with --maximum-buffer-size 0, then --minimum-buffer-size is required. Were you using 0 for this in all cases?

Note that a worker with priority 98 is supposed to only work jobs with a priority number <= 98 (docs). From the readme:

We use the Linux priority scale - a lower number is more important.
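
For example (SomeJob is just a stand-in job class for illustration):

# Lower number = more important. A worker restricted to priority 98 will work
# the first two of these, but never the priority-99 job:
SomeJob.enqueue(job_options: { priority: 1 })   # more important than 98 => eligible
SomeJob.enqueue(job_options: { priority: 98 })  # equal to 98 => still eligible
SomeJob.enqueue(job_options: { priority: 99 })  # needs a priority >= 99 or unrestricted worker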

Reproduced.

app.rb:

require 'bundler/setup'

require 'sequel'
require 'que'
require 'logger'

# Connect Que to Postgres via Sequel and log to stdout
DB = Sequel.connect(ENV['DATABASE_URL'])
Que.connection = DB
LOGGER = Logger.new(STDOUT)

# No-op job that logs its own priority, so we can see which jobs get worked when
class MyJob < Que::Job
  def run
    LOGGER.info("Running MyJob - priority: #{que_attrs[:priority]}")
  end
end

# Reset database
➜ docker ps -aq | xargs -r docker stop && docker container ls -aq | xargs -r docker rm && docker volume rm -f que_db-data && docker-compose up -d db

# Migrate database
➜ export DATABASE_URL=postgres://que:que@localhost:5697/que-test
➜ ruby -r bundler/setup -r que -r sequel -e "Que.connection = Sequel.connect(ENV['DATABASE_URL']); Que.migrate!(version: 5)"

# Enqueue jobs
➜ ruby -r bundler/setup -r ./app.rb -e '1000.times { MyJob.enqueue(job_options: { priority: 2 }) }'

# Slow worker process
➜ bundle exec que --worker-priorities 1,2 --poll-interval 5 --minimum-buffer-size 0 --maximum-buffer-size 0 ./app.rb
Que 1.3.1 started worker process with:
  Worker threads: 2 (priorities: 1, 2)
  Buffer size: 0-0
  Queues:
    - default (poll interval: 5.0s)
Que waiting for jobs...
I, [2022-03-03T18:40:17.659926 #663423]  INFO -- : Running MyJob
I, [2022-03-03T18:40:22.692268 #663423]  INFO -- : Running MyJob
I, [2022-03-03T18:40:27.728657 #663423]  INFO -- : Running MyJob
I, [2022-03-03T18:40:32.758936 #663423]  INFO -- : Running MyJob
^C

➜ bundle exec que --worker-priorities 2,2 --poll-interval 5 --minimum-buffer-size 0 --maximum-buffer-size 0 ./app.rb
Que 1.3.1 started worker process with:
  Worker threads: 2 (priorities: 2, 2)
  Buffer size: 0-0
  Queues:
    - default (poll interval: 5.0s)
Que waiting for jobs...
I, [2022-03-03T18:41:24.465561 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.467681 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.519040 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.519249 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.574609 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.574915 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.629634 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.629952 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.684866 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.685153 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.740311 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.740582 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.793546 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.793708 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.846446 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.846705 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.900925 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.901219 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.954279 #669174]  INFO -- : Running MyJob
I, [2022-03-03T18:41:24.954488 #669174]  INFO -- : Running MyJob
^C

➜ bundle exec que --worker-priorities 2,3 --poll-interval 5 --minimum-buffer-size 0 --maximum-buffer-size 0 ./app.rb
Que 1.3.1 started worker process with:
  Worker threads: 2 (priorities: 2, 3)
  Buffer size: 0-0
  Queues:
    - default (poll interval: 5.0s)
Que waiting for jobs...
I, [2022-03-03T18:42:26.786848 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:26.789325 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:26.840985 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:26.841253 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:26.895775 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:26.895993 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:26.950753 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:26.951034 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.006818 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.007044 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.061378 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.061728 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.114400 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.114722 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.166990 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.167341 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.220479 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.220697 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.273810 #673945]  INFO -- : Running MyJob
I, [2022-03-03T18:42:27.274160 #673945]  INFO -- : Running MyJob
^C

Curiously, it makes no difference whether the worker with the smaller priority number has any jobs to process; the higher number one still sleeps for the poll interval between jobs:

➜ ruby -r bundler/setup -r ./app.rb -e 'DB[:que_jobs].truncate; [1, 2].each { |priority| 10.times { MyJob.enqueue(job_options: { priority: priority }) } }'

➜ bundle exec que --worker-priorities 1,2 --poll-interval 5 --minimum-buffer-size 0 --maximum-buffer-size 0 ./app.rb                                       
Que 1.3.1 started worker process with:
  Worker threads: 2 (priorities: 1, 2)
  Buffer size: 0-0
  Queues:
    - default (poll interval: 5.0s)
Que waiting for jobs...
I, [2022-03-03T18:51:54.260274 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.264529 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.314309 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.314617 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.369292 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.369491 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.424902 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.425268 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.480069 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.480357 #722139]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T18:51:54.535363 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:51:59.564773 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:52:04.594022 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:52:09.623441 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:52:14.655504 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:52:19.684734 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:52:24.710064 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:52:29.736994 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:52:34.767277 #722139]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T18:52:39.789264 #722139]  INFO -- : Running MyJob - priority: 2
^C

I assume you meant --worker-count; but that's actually redundant when providing --worker-priorities.

Yep, sorry. It is redundant when --worker-priorities is provided, but the same behaviour occurs when --worker-count is greater than the number of explicitly provided priorities and the remaining workers default to any priority, e.g. --worker-count 10 --worker-priorities 98,99

And with --maximum-buffer-size 0, then --minimum-buffer-size is required. Were you using 0 for this in all cases?

Sorry, I should have copied the options I actually used rather than writing them down from memory. Yes, minimum buffer size was 0 in all cases. I'm pretty sure it isn't used, so I doubt changing it will have an impact.

The behaviour of at least my last command is the same in all versions back to and including 1.0.0.beta4 (with the priority job option passed at the top level of enqueue where the older API requires it). So we haven't broken it, at least 😅

In v1.0.0.beta3, no jobs are executed unless the minimum buffer size is >= 1:

➜ ruby -r bundler/setup -r ./app.rb -e 'DB[:que_jobs].truncate; [1, 2].each { |priority| 10.times { MyJob.enqueue(priority: priority) } }' && bundle exec que --poll-interval 5 --minimum-buffer-size 0 --maximum-buffer-size 0 ./app.rb
Que waiting for jobs...
^C

➜ ruby -r bundler/setup -r ./app.rb -e 'DB[:que_jobs].truncate; [1, 2].each { |priority| 10.times { MyJob.enqueue(priority: priority) } }' && bundle exec que --poll-interval 5 --minimum-buffer-size 0 --maximum-buffer-size 1 ./app.rb
Que waiting for jobs...
^C

➜ ruby -r bundler/setup -r ./app.rb -e 'DB[:que_jobs].truncate; [1, 2].each { |priority| 10.times { MyJob.enqueue(priority: priority) } }' && bundle exec que --poll-interval 5 --minimum-buffer-size 1 --maximum-buffer-size 1 ./app.rb
Que waiting for jobs...
I, [2022-03-03T19:15:22.827060 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.827585 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.827990 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.829635 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.830213 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.831487 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.832133 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.879475 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.879693 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.879923 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.880835 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.880441 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.881662 #840663]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T19:15:22.882907 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.935141 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.935566 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.935974 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.936237 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.936745 #840663]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T19:15:22.937618 #840663]  INFO -- : Running MyJob - priority: 2
^C

So we haven't broken it at least 😅

I remember seeing this behaviour when running que with the default config when we first used it for lyrebird. I can't remember which version that was, but I think it was a 1.0.0 beta. I thought it was just the way the poller worked, and that the behaviour had been fixed in some more recent version, but it turns out we just weren't seeing it because of the way we configure our workers.

Looks like we started using it at v1.0.0.beta3

Note that a worker with priority 98 is supposed to only work jobs with a priority number <= 98

Yeah, I swear I was seeing two 99 jobs run each poll interval with worker priorities of 98,99 but that doesn’t make sense. I’ll check.

Curiously, it makes no difference whether the worker with the smaller priority number has any jobs to process; the higher number one still sleeps for the poll interval between jobs

Remember that there is only one locker and one poller for the whole process, so the workers share a poller. Something about the fact that there are no jobs for one of the workers is causing the poller to either not return any jobs or poll less frequently. I suspect the latter, probably something to do with the last_poll_satisfied flag in the poller.
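
To make that concrete, here is a rough sketch of the kind of logic I mean - not Que's actual code, names and shapes are made up:

class PollerSketch
  def initialize(poll_interval:)
    @poll_interval = poll_interval
    @last_poll_satisfied = true
    @last_polled_at = nil
  end

  # The locker asks this whenever workers need jobs. If the previous poll came
  # back short, we refuse to poll again until a full interval has elapsed,
  # even if some workers could be fed right now.
  def should_poll?
    return true if @last_poll_satisfied
    @last_polled_at.nil? || (Time.now - @last_polled_at) >= @poll_interval
  end

  def record_poll(requested_count, locked_count)
    @last_polled_at = Time.now
    @last_poll_satisfied = locked_count >= requested_count
  end
end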

Something about the fact that there are no jobs for one of the workers

But that's just it - in my test, there are jobs suitable for both workers. Are you perhaps meaning jobs that are actually picked up?

But that's just it - in my test, there are jobs suitable for both workers.

I think we're interpreting your results slightly differently. I reckon BOTH workers are working the p1 jobs, and when those are all done, no jobs are returned for the p1 worker on the next poll, which sets last_poll_satisfied to false, meaning the next poll happens not immediately but 5 seconds later.

Assumption: higher-priority jobs are prioritised over lower-priority jobs even by lower-priority workers, so the priority is used both by the poller and within the buffer. I think this could be right, but I'm not 100% sure.

Edit: actually, you’re not using a buffer so maybe this only works this way because the poller is loading up p1 jobs in preference to p2 jobs for both workers.

Yeah, I think that unless enough jobs are found to satisfy ALL (rather than ANY) of the worker priorities, the poller will wait until the poll interval has elapsed before polling again.

So if we have 3 p1 workers and 5 p2 workers waiting for jobs and there are only 2 p1 jobs ready to be worked then we will see the polling slow down, even if there are 7 million p2 jobs.
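
As a back-of-the-envelope illustration of that scenario (the check is a simplification; the numbers are from the example above):

# 3 idle p1 workers and 5 idle p2 workers => jobs requested per priority
requested = { 1 => 3, 2 => 5 }

# Only 2 p1 jobs exist; there are plenty of p2 jobs, so that request is filled
locked    = { 1 => 2, 2 => 5 }

satisfied = locked.values.sum == requested.values.sum  # 7 == 8 => false
# An "unsatisfied" poll makes the poller sleep for the whole poll interval,
# even though it could keep handing out p2 jobs immediately.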

Looks like you're right!

➜ ruby -r bundler/setup -r ./app.rb -e 'DB[:que_jobs].truncate; [1, 2].each { |priority| 10.times { MyJob.enqueue(job_options: { priority: priority }) } }' && bundle exec que --worker-priorities 1,2 --poll-interval 2 --minimum-buffer-size 0 --maximum-buffer-size 0 --log-level debug --log-internals ./app.rb | grep -E '(newly_locked|Running MyJob)'
I, [2022-03-03T21:17:01.680660 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:01.680570Z","queue":"default","locked":2,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294490,1655071233245294491]}
I, [2022-03-03T21:17:01.684802 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.689963 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.736982 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:01.736897Z","queue":"default","locked":2,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294492,1655071233245294493]}
I, [2022-03-03T21:17:01.740081 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.740355 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.792540 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:01.792454Z","queue":"default","locked":2,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294494,1655071233245294495]}
I, [2022-03-03T21:17:01.795421 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.795866 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.848702 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:01.848615Z","queue":"default","locked":2,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294496,1655071233245294497]}
I, [2022-03-03T21:17:01.851805 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.852092 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.904769 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:01.904686Z","queue":"default","locked":2,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294498,1655071233245294499]}
I, [2022-03-03T21:17:01.908140 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.908192 #1468504]  INFO -- : Running MyJob - priority: 1
I, [2022-03-03T21:17:01.962151 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:01.962063Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294500]}
I, [2022-03-03T21:17:01.964440 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:03.989532 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:03.989444Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294501]}
I, [2022-03-03T21:17:03.991831 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:06.018296 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:06.018252Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294502]}
I, [2022-03-03T21:17:06.020165 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:08.044354 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:08.044287Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294503]}
I, [2022-03-03T21:17:08.046438 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:10.071469 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:10.071360Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294504]}
I, [2022-03-03T21:17:10.073842 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:12.100891 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:12.100802Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294505]}
I, [2022-03-03T21:17:12.103263 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:14.128802 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:14.128718Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294506]}
I, [2022-03-03T21:17:14.131393 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:16.155640 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:16.155550Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294507]}
I, [2022-03-03T21:17:16.157353 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:18.181780 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:18.181698Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294508]}
I, [2022-03-03T21:17:18.184393 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:20.209616 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:20.209560Z","queue":"default","locked":1,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[1655071233245294509]}
I, [2022-03-03T21:17:20.211325 #1468504]  INFO -- : Running MyJob - priority: 2
I, [2022-03-03T21:17:22.236063 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:22.235974Z","queue":"default","locked":0,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[]}
I, [2022-03-03T21:17:24.259737 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:24.259647Z","queue":"default","locked":0,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[]}
I, [2022-03-03T21:17:26.282041 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:26.281952Z","queue":"default","locked":0,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[]}
I, [2022-03-03T21:17:28.306518 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:28.306422Z","queue":"default","locked":0,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[]}
I, [2022-03-03T21:17:30.332852 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:30.332768Z","queue":"default","locked":0,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[]}
I, [2022-03-03T21:17:32.356840 #1468504]  INFO -- : {"lib":"que","hostname":"ZimbiX-GreenSync","pid":1468504,"thread":2340,"internal_event":"poller_polled","object_id":2400,"t":"2022-03-03T10:17:32.356766Z","queue":"default","locked":0,"priorities":{"2":1,"1":1},"held_locks":[],"newly_locked":[]}
^C

Jobs are requested based on the count of each priority. The Poller sets @last_poll_satisfied to whether the number of job IDs in newly_locked equals the sum of the priorities hash's values (i.e. whether as many jobs as were requested came back across all priorities).

I think we need to change this to be whether the number of jobs returned for any priority matches the count requested for that priority.
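
A rough sketch of the difference (the hash shapes and variable names are assumptions for illustration, not Que's exact internals):

# What the poller requested, per priority (matching the log output: {"2"=>1,"1"=>1})
priorities = { 1 => 1, 2 => 1 }

# Suppose the poll locked one p2 job and no p1 jobs
locked_by_priority = { 1 => [], 2 => [1655071233245294501] }
newly_locked_ids   = locked_by_priority.values.flatten

# Current behaviour (roughly): satisfied only if the total locked matches the total requested
current_satisfied  = newly_locked_ids.length == priorities.values.sum  # => false

# Proposed: satisfied if ANY priority's request was fully met, so an idle p1 worker
# can't force the whole process back onto the poll interval
proposed_satisfied = priorities.any? { |p, count| locked_by_priority.fetch(p, []).length >= count }  # => true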

I've just finished writing some tests for this; working on implementation now