
With hard drives, sequential writes are fast and random writes are slow. So the more jobs run in parallel, the more the write pattern on the repository looks like random I/O and the slower the data gets written to disk; but if we limit it to a single job, a lot of time is wasted waiting on snapshots and other per-job overhead.
So my idea is this: if Veeam had a feature that took the data from all running jobs and wrote it all sequentially into one big file (job1block|job2block|job1block|...), the writes would stay fast.
Afterwards it could use block cloning (as on ReFS or XFS) to split that big file into the individual backup files. This should also be quick, since it only touches metadata rather than moving the data itself.
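To make the idea concrete, here is a small toy simulation of the two steps: round-robin the jobs' blocks into one sequentially written file while recording which extents belong to which job, then "split" by extent list. The job names, block sizes, and function names are made up for illustration; a real implementation would issue actual block-clone calls (e.g. FSCTL_DUPLICATE_EXTENTS_TO_FILE on ReFS) instead of copying bytes.

```python
import io

def write_interleaved(streams):
    """Append blocks from several job streams into one big file.

    streams: dict of job_name -> list of blocks (bytes).
    Returns (big_file_bytes, extent_map), where extent_map records the
    (offset, length) extents each job owns inside the big file.
    """
    big = io.BytesIO()
    extents = {job: [] for job in streams}
    queues = {job: list(blocks) for job, blocks in streams.items()}
    # Round-robin over the jobs, always appending at the current end of
    # the file, so the disk only ever sees one sequential write stream.
    while any(queues.values()):
        for job, q in queues.items():
            if q:
                block = q.pop(0)
                extents[job].append((big.tell(), len(block)))
                big.write(block)
    return big.getvalue(), extents

def split_by_clone(big, extents):
    """Simulate the metadata-only split: rebuild each job's backup file
    from its extent list. Here we copy bytes; block cloning would just
    duplicate extent references in filesystem metadata."""
    return {job: b"".join(big[off:off + length] for off, length in exts)
            for job, exts in extents.items()}

big, extents = write_interleaved({
    "job1": [b"AAAA", b"BBBB"],
    "job2": [b"CCCC"],
})
files = split_by_clone(big, extents)
# big is AAAACCCCBBBB on disk, yet files["job1"] comes back as AAAABBBB.
```

The point of the sketch is that the expensive part (writing payload) happens once, sequentially, and the per-job files fall out of bookkeeping afterwards.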
As backups are taken and deleted, the storage will fragment over time, so it will be important to keep the free space consolidated and performant (similar to what defrag /X does for free space on NTFS).
And lastly, since the number of parallel jobs would no longer hurt write speed, an additional feature could have Veeam automatically start another queued job whenever the repository is not the bottleneck.
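The auto-start logic could be a simple feedback rule. This is a hypothetical policy sketch, not Veeam's actual scheduler: the threshold, the job names, and the `autoscale` function are all assumptions for illustration.

```python
import collections

def autoscale(active_jobs, disk_busy_pct, queue, max_jobs=16, busy_threshold=80):
    """Decide whether to launch one more queued job.

    Hypothetical policy: if the repository disk is below busy_threshold
    percent utilization and jobs are waiting, start the next one; if the
    disk is saturated (or the queue is empty), hold steady.
    """
    if queue and active_jobs < max_jobs and disk_busy_pct < busy_threshold:
        return queue.popleft()  # next job to start
    return None                 # no headroom, start nothing

queue = collections.deque(["job3", "job4"])
started = autoscale(2, 55, queue)   # disk has headroom: starts "job3"
held = autoscale(3, 95, queue)      # disk saturated: holds "job4" back
```

In practice the utilization signal would come from repository I/O counters, and some hysteresis would be needed so jobs aren't started and throttled in rapid succession.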
This might not be useful for all storage solutions, it would require a lot of work and testing, and I may not have thought everything through; but if it is possible, it should give noticeably better performance on hard drives.