swar/Swar-Chia-Plot-Manager

Swar keeps plotting even when the destination is full (when using parallel jobs). Proposed fix in description

Leemonn opened this issue

Hello, when using parallel plotting, Swar will start new jobs even when the destination disk is already going to be filled by a job that started earlier.
It's easier to understand with an example:

There is a job called "JOB_A" with 1 destination drive, max_concurrent = 2, and max_for_phase1 = 1:

  1. The destination disk is 5.45 TiB full out of 5.6 TiB. It has space for only 1 more plot.
  2. There is 1 JOB_A active right now (max_concurrent = 2, max_for_phase1 = 1).
  3. As soon as that job finishes phase 1, Swar will start another JOB_A (max_concurrent = 2 allows a second job, and the single phase 1 slot is now free).
  4. Now there are 2 JOB_A instances plotting to the destination disk, but only the first one will be able to copy its finished plot there (see the count-only sketch below).
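
For illustration, here is a minimal Python sketch of the count-only admission check the steps above describe. This is an assumption about the behaviour, not Swar's actual code: it consults only the concurrency limits, so it happily starts a second job even though the destination cannot hold a second plot.

# Hypothetical illustration of the count-only check described above; not Swar's real code.
def can_start_new_job(active_jobs: int, jobs_in_phase_1: int,
                      max_concurrent: int, max_for_phase1: int) -> bool:
    # Only concurrency limits are consulted; destination free space is never checked.
    return active_jobs < max_concurrent and jobs_in_phase_1 < max_for_phase1

# Scenario from the list: 1 active JOB_A that has just left phase 1.
print(can_start_new_job(active_jobs=1, jobs_in_phase_1=0,
                        max_concurrent=2, max_for_phase1=1))  # True -> a second JOB_A starts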

How to fix it? (I can't provide code right now because I don't have time, but it should be easy to implement.)
Using the same example as before, in pseudo-code:

With:
plot_size = 101.4 GiB
destination_hdd is 5.45 TiB full out of 5.6 TiB of space (153.6 GiB remaining)
Just like in the example above, there is 1 active JOB_A for this destination drive.

Now that the first JOB_A has reached phase 2, the plot manager checks whether it can start another job:

disk_remaining_space = getDiskSpace(destination_hdd)
  // disk_remaining_space = 153.6 GiB

active_jobs = getActiveJobs(destination_hdd)
  // active_jobs = 1

active_jobs_size = active_jobs * plot_size
  // active_jobs_size = 101.4 GiB

remaining_plots_hdd_can_hold = calculateRemainingPlotSpace(plot_size, disk_remaining_space - active_jobs_size)
  // remaining_plots_hdd_can_hold = 0 (because disk_remaining_space - active_jobs_size = 52.2 GiB, which is less than one plot_size)

if (remaining_plots_hdd_can_hold > 0)
    start_job(config)
else
    disk_is_full(destination_hdd)
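
For concreteness, here is a minimal runnable Python version of the proposed check. It is only a sketch under the example's assumptions: PLOT_SIZE_BYTES reflects the roughly 101.4 GiB of a k=32 plot, and get_active_jobs, start_job, disk_is_full and config are hypothetical placeholders mirroring the pseudo-code above; shutil.disk_usage is from the Python standard library.

import math
import shutil

PLOT_SIZE_BYTES = int(101.4 * 1024**3)  # approximate size of one k=32 plot (~101.4 GiB)

def remaining_plots_hdd_can_hold(destination_hdd: str, active_jobs: int,
                                 plot_size: int = PLOT_SIZE_BYTES) -> int:
    # Free space reported by the OS for the destination drive.
    disk_remaining_space = shutil.disk_usage(destination_hdd).free
    # Space already promised to jobs that are still plotting to this drive.
    active_jobs_size = active_jobs * plot_size
    # How many additional plots still fit after honouring those promises.
    return max(0, math.floor((disk_remaining_space - active_jobs_size) / plot_size))

# Hypothetical call site, mirroring the pseudo-code above:
# active_jobs = get_active_jobs(destination_hdd)   # would be 1 in the example
# if remaining_plots_hdd_can_hold(destination_hdd, active_jobs) > 0:
#     start_job(config)
# else:
#     disk_is_full(destination_hdd)

With the example's numbers (153.6 GiB free and one active job reserving 101.4 GiB), only 52.2 GiB remains unpromised, so the function returns 0 and the second JOB_A is not started.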