CGRU/cgru

job picked up by render-node from different pool

ultra-sonic opened this issue · 8 comments

Hi Timur,

To mitigate the "capacity dilemma" #540 I have written a Python script that locks jobs to different pools.
I have a pool for 256-thread machines, one for 192-thread machines, and so on.

The script sent this payload for job id 139:

{
    "action": {
        "user_name": "xxxxx", 
        "ids": [
            139
        ], 
        "type": "jobs", 
        "host_name": "xxxxxx", 
        "params": {
            "pools": {
                "/berlin/station/32": 99, 
                "/berlin/farm/32": 99
            }
        }
    }
}
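A script like the one described might build and send such a payload as follows. This is only a sketch: the helper name, the server address/port, and the assumption that afserver accepts the JSON action via a plain HTTP POST are mine, not from this thread.

```python
import json

def pool_lock_payload(job_ids, pools, user_name, host_name):
    """Build a 'set job pools' action payload shaped like the one above."""
    return {
        "action": {
            "user_name": user_name,
            "ids": list(job_ids),
            "type": "jobs",
            "host_name": host_name,
            "params": {"pools": dict(pools)},
        }
    }

payload = pool_lock_payload(
    [139],
    {"/berlin/station/32": 99, "/berlin/farm/32": 99},
    "xxxxx", "xxxxxx",
)

# Actually delivering the payload is environment-specific; the address,
# port, and path below are assumptions, not documented CGRU endpoints:
# import urllib.request
# req = urllib.request.Request("http://afserver:51000/json",
#                              data=json.dumps(payload).encode())
# urllib.request.urlopen(req)
print(json.dumps(payload, indent=4))
```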

Still, the job is being picked up by render nodes from pool /berlin/farm/256.
Any idea why this can happen?

Do I have to enable pool solving somewhere?

Or do I need to restart afserver after I have created all the pools?

Hi Oliver!

These values are a priority, not a hosts mask.
/berlin/station/32 has priority 99 and /berlin/farm/256 has 0, but it is not disabled.
However, if some pool's priority is 100, the zero (not specified) pools will be disabled; you could say it then works like a hosts mask.

Pool solving is always enabled and can't be disabled.
After pool manipulation, an afserver restart is not needed.

Also, I forgot to say that a priority of -100 disables a pool.
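Putting these rules together, here is a small model of which pools stay eligible for a job. This is my reading of the answers in this thread, not CGRU's actual solver code:

```python
def pool_enabled(job_pools, pool_name):
    """Model of the pool rules described above (a sketch, not the real solver):
    - priority -100 disables a pool explicitly;
    - if any listed pool has priority 100, pools not listed (implicit 0)
      are disabled, so the list acts like a hosts mask;
    - otherwise unlisted pools stay enabled with priority 0."""
    priority = job_pools.get(pool_name, 0)
    if priority == -100:
        return False
    if 100 in job_pools.values() and pool_name not in job_pools:
        return False
    return True
```

With the payload from the top of the thread (priorities of 99), /berlin/farm/256 stays enabled at priority 0, which would explain why its nodes still pick up the job.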

I don't really understand. If I have these pools:

/berlin/farm/32
/berlin/farm/64
/berlin/farm/128
/berlin/farm/256

What would be the payload to render a job on /berlin/farm/32 only and disable the other 3 pools for the job?

"/berlin/farm/32": 100

OK, thanks. I assume that to render it on 2 pools it would be this?

"/berlin/farm/32": 100,
"/berlin/farm/64": 100

Why does a priority of 99 work completely differently?

  1. Yes.
  2. To make pools work like a hosts mask within the same priority parameter, rather than adding more parameter(s).
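Dropped into the same action wrapper as the payload at the top of the thread, the confirmed two-pool version would look like this (a sketch; the user/host fields are the placeholders used earlier):

```python
import json

# Priority 100 on both pools means all unlisted pools are disabled,
# so only /berlin/farm/32 and /berlin/farm/64 can pick up job 139.
payload = {
    "action": {
        "user_name": "xxxxx",
        "ids": [139],
        "type": "jobs",
        "host_name": "xxxxxx",
        "params": {
            "pools": {
                "/berlin/farm/32": 100,
                "/berlin/farm/64": 100,
            }
        },
    }
}
print(json.dumps(payload, indent=4))
```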