Bug in allocating resources based on share value in proportion.go
ocherfas opened this issue
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Created two queues with the following deserved resources:
default:
deserved GPUs: 2
test:
deserved GPUs: 0
First I created a job under the default queue, and another job under the test queue that requests one GPU.
After that, the default queue shares resources and the test queue does not.
I then created another job under the default queue. The scheduler first reclaims the resource from the test queue. After that job terminates, the scheduler allocates resources to the test queue again, then reclaims its resources again, and ends up in an infinite loop of reclaiming from and allocating to the test queue.
What you expected to happen:
I expected that after reclaiming the resource from the test queue, the new job in the default queue would be allocated.
kube-batch configuration:
```yaml
actions: "reclaim, allocate"
tiers:
- plugins:
  - name: predicates
  - name: proportion
```
Anything else we need to know?:
I think the problem is in proportion.go, in ssn.AddQueueOrderFn: the 1/-1 return values should be the opposite. The queue that shares the most resources should come first in the order of queues that allocate.go tries to allocate resources to.
So if (rv share) > (lv share), then the return value should be 1.
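To make the suggested flip concrete, here is a small self-contained sketch of the two comparison directions. It uses a stand-in struct rather than the real plugin types (the actual code in proportion.go keeps the per-queue share in its own structures), so the names and share values below are only illustrative:

```go
package main

import "fmt"

// queueAttr is a stand-in for the per-queue state the proportion plugin
// tracks; only the share value matters for the ordering discussed here.
type queueAttr struct {
	name  string
	share float64
}

// currentCompare reflects my reading of the function registered today via
// ssn.AddQueueOrderFn: it returns -1 when lv's share is smaller than rv's.
func currentCompare(lv, rv queueAttr) int {
	if lv.share == rv.share {
		return 0
	}
	if lv.share < rv.share {
		return -1
	}
	return 1
}

// suggestedCompare is the flip proposed above: when (rv share) > (lv share)
// it returns 1 instead of -1.
func suggestedCompare(lv, rv queueAttr) int {
	if lv.share == rv.share {
		return 0
	}
	if rv.share > lv.share {
		return 1
	}
	return -1
}

func main() {
	// Illustrative share values for the reproduction above: default deserves
	// 2 GPUs and uses 1; test deserves 0 GPUs, so its share is set to a
	// higher value here.
	def := queueAttr{name: "default", share: 0.5}
	test := queueAttr{name: "test", share: 1.0}

	fmt.Printf("current:   compare(%s, %s) = %d\n", def.name, test.name, currentCompare(def, test))
	fmt.Printf("suggested: compare(%s, %s) = %d\n", def.name, test.name, suggestedCompare(def, test))
}
```

The only difference between the two functions is the sign of the result when the shares differ, which is the flip described above.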