VeeamHub/powershell

vb365-jobmanager.ps1 Super Long Execution Times - Is It Necessary?


Description
I have discovered a line in this script that retrieves the objects in a managed job and converts them to the ManagedObject class. The result does not seem to be used anywhere else in the code, yet it accounts for the majority of the execution time (3 hours out of 3.5 hours). I have commented it out in my implementation of the script, but I wanted input from the script author to understand why it is there and whether removing it affects any other part of the script.

Code Snippet

[ManagedObject[]] GetManagedObjects() {

    #TODO: Is there any need to get the weight at this point for already present objects?
    
    $this.LoadWeightTable()
    
    $VBOBackupItems = Get-VBOBackupItem -Job $this.VBOJob

    ## Check if the item is in the weight table and add it as a managed object with its weight

    $objects = $VBOBackupItems | ForEach-Object { [ManagedObject]::new($_) }
                
    return $objects
}

Line of Concern
$objects = $VBOBackupItems | ForEach-Object { [ManagedObject]::new($_) }

Hi Mako,

this line is important: it is what calculates and returns the weight of each object, and the sizing of the jobs is based on that weight. Weight calculation is only really relevant for SharePoint sites, as these can differ considerably depending on the number of subsites.
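Roughly, the constructor ties into the weight calculation like this (a simplified sketch, not the script's exact class layout; the member names besides GetWeight are illustrative):

    class ManagedObject {
        [object] $VBOBackupItem
        [int] $Weight

        ManagedObject([object] $BackupItem) {
            $this.VBOBackupItem = $BackupItem
            # Weight is resolved eagerly on construction, which is why
            # building the ManagedObject list dominates the runtime.
            $this.Weight = $this.GetWeight()
        }

        # Simplified placeholder; the real method is shown below.
        [int] GetWeight() {
            return 1
        }
    }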

The method doing this calculation (and consuming most of the time) is:

    [int] GetWeight() {
        if ($this.VBOBackupItem.Type -eq "Team") {
            # Teams get a static weight (configurable, 3 by default)
            return $global:countTeamAs
        } else {
            if ($global:recurseSP) {
                # Enumerate the site and all its subsites; this remote call is the slow part
                return (Get-VBOOrganizationSite -Organization $global:org -URL ([Veeam.Archiver.PowerShell.Model.BackupItems.VBOBackupSite] $this.VBOBackupItem).Site.URL -Recurse).Count
            } else {
                # Without -recurseSP every SharePoint site counts as 1
                return 1
            }
        }
    }

As you can see, it all depends on the $global:recurseSP parameter; otherwise the weight is based on static values (1 for a SharePoint Online site, or the value configured for a team, 3 by default). Recursing through the sites is what takes so long.

Are you using the -recurseSP parameter in your script call? If so, just remove it and you should get the reduced runtime with the same result as commenting out the line as you described.
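For example (only -recurseSP is confirmed above; the -Organization parameter name is an assumption for illustration):

    # With -recurseSP: accurate SharePoint weights, but every site is enumerated (slow)
    .\vb365-jobmanager.ps1 -Organization "contoso.onmicrosoft.com" -recurseSP

    # Without it: every SharePoint site counts as weight 1 (fast)
    .\vb365-jobmanager.ps1 -Organization "contoso.onmicrosoft.com"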

If the weight is not calculated, you might end up with a SharePoint Online site that has a hundred subsites added to a job; the job would then work on 101 objects instead of the estimated 1, and the object-based separation of jobs would not behave as expected. When the sites are recursed, only the main site is added to the job, but it is counted with a weight of 101 (1 main site + 100 subsites), so the next job fills up earlier.
With the parameter and the weight calculation, a job limit of 500 should give you a job that backs up 500 objects. Without the weight calculation (or without the recurse parameter), you might end up with a job backing up 601 objects even though you configured a limit of 500.
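To make the arithmetic concrete, here is a minimal standalone sketch (not the script's actual code) of how a simple greedy fill against a limit of 500 cuts a job off earlier once weights are known:

    # Minimal greedy-fill sketch; not taken from vb365-jobmanager.ps1.
    $limit   = 500
    # One main site carrying 100 subsites (weight 101), plus 499 plain sites:
    $weights = @(101) + (@(1) * 499)

    $jobs    = @()
    $current = 0
    foreach ($w in $weights) {
        # Close the current job once the next object would exceed the limit
        if ($current + $w -gt $limit -and $current -gt 0) {
            $jobs   += $current
            $current = 0
        }
        $current += $w
    }
    $jobs += $current

    $jobs   # -> 500, 100: the first job stops at 500 effective objects.
            # If each entry counted as 1 instead, all 500 entries would land
            # in one job that actually backs up 101 + 499 = 600 real objects.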

If you know that you have no (or almost no) subsites, you can also remove the -recurseSP parameter, as it won't have much effect on the jobs (though you might see something like 510 objects per job backed up even with a 500-object limit).

I hope that explanation makes sense.

Got it @StefanZ8n. Thank you for the quick and comprehensive reply.