
Here is the business case:
We use vSphere tags to map VMs to backup jobs. We have several tags like "Backup-Repo1-24H-2330-STD" (backup to repo 1, every 24 h, starting at 23:30, standard processing: no application-aware processing or log handling). All VMs carrying that tag are mapped to the corresponding Veeam backup job.
The problem is that with this kind of tag there is no way to order the VMs: in the Veeam job properties there is only the tag, not the full list of VMs. For us it is easy to reach jobs with more than 150 VMs. Veeam starts processing the VMs in an undefined order, which could mean processing the 140 VMs of ~15 GB each first and backing up the two or three multi-TB servers last, which may push us past our backup window during an active full.
With parallelism we typically achieve 2-3 GB/s aggregated backup speed. But a single big disk on its own (e.g. a 2 TB disk) is much slower: no parallelism, roughly 130 MB/s best case, about 2 h 10 min per TB.
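The arithmetic behind those figures, as a quick check (the 130 MB/s single-stream rate is the assumed best case from above):

```python
# Quick arithmetic behind the figures above: single-stream throughput of
# ~130 MB/s applied to 1 TB, and to one 2 TB disk as a whole.
MB_PER_TB = 1_000_000  # decimal units, as storage capacities are reported
SINGLE_STREAM_MB_S = 130

def hours_for(tb, mb_s=SINGLE_STREAM_MB_S):
    """Hours needed to read `tb` terabytes at `mb_s` megabytes per second."""
    return tb * MB_PER_TB / mb_s / 3600

print(f"{hours_for(1):.2f} h/TB")  # ≈ 2.14 h, i.e. roughly 2 h 10 min per TB
print(f"{hours_for(2):.2f} h")     # one 2 TB disk alone: ≈ 4.27 h
```

That ~4-hour figure for a 2 TB disk is why starting the big VM first matters: everything else can overlap with it instead of waiting behind it.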
I think it would be interesting to have an option to "start processing VMs by size, biggest first" (and the option for smallest first too). This should be easy to implement, because Veeam needs to query vCenter's inventory anyway to find the VMs carrying a specific tag. Sorting them by expected read size would be even better, but much harder to implement IMHO, because you would need to read the CBT data for each VM, and it would not be worth it in most scenarios.
With that option, the big VM would start its 4-hour backup and the rest of the VMs would be processed in parallel alongside it.
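The proposed ordering is just a sort on provisioned size applied to the VM list the tag query already returns. A minimal sketch, with made-up VM names and sizes standing in for the real inventory data:

```python
# Illustrative sketch of the requested "big VMs first" ordering. The
# (name, size) pairs are made-up sample data; in a real implementation
# they would come from the vCenter query that tag-based jobs already run.

def order_for_backup(vms, big_first=True):
    """Return the tagged VMs sorted by provisioned size in GB."""
    return sorted(vms, key=lambda vm: vm["size_gb"], reverse=big_first)

tagged_vms = [
    {"name": "small-01", "size_gb": 15},
    {"name": "small-02", "size_gb": 15},
    {"name": "bigdb-01", "size_gb": 4096},  # multi-TB server
    {"name": "file-01",  "size_gb": 2048},
]

queue = order_for_backup(tagged_vms)
print([vm["name"] for vm in queue])
# → ['bigdb-01', 'file-01', 'small-01', 'small-02']
```

With `big_first=False` the same function gives the "smallest first" variant; either way the backup engine would simply dispatch the queue in order and let its existing parallelism fill the remaining task slots.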
There are workarounds (several tags creating several groups for the same job), but they complicate the system and make deployment and reporting less clean. We do not want to map VMs directly to jobs, because we do not want to give vCenter administrators the rights to administer the Veeam servers (role segregation).
Thanks and best regards.