Every now and then it happens that the last disk in your backup job is the biggest one. And if you have more disks than you have CPUs/cores, it might start only after everything else is finished.
Bored from staring at the job progress window, I started thinking: why not optimize the job queue inside the backup job?
Let's presume I have 4 cores on my proxy and can handle 4 concurrent jobs.
(each "o" represents, for example, 10 GB of data / 1 min; [n x o] represents a vdisk)
An "abcd"/random job queue could sometimes look like this:

cpu1: [ooooo][ooooo][ooooo][ooooo][oooooooooooooooooooo] 40min
cpu2: [ooooo][ooooo][ooooo][ooooo][ooooooooooooooo] 35min
cpu3: [ooooo][ooooo][ooooo][ooooo] 20min
cpu4: [ooooo][ooooo][ooooo][ooooo] 20min
:: total time for job 40min
"biggest first" job queue:
cpu1: [oooooooooooooooooooo][ooooo][ooooo] 30min
cpu2: [ooooooooooooooo][ooooo][ooooo][ooooo] 30min
cpu3: [ooooo][ooooo][ooooo][ooooo][ooooo][ooooo] 30min
cpu4: [ooooo][ooooo][ooooo][ooooo][ooooo] 25min
:: total time for job 30min
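This "biggest first" ordering is basically the classic longest-processing-time-first greedy scheduling rule. Below is a minimal sketch that replays the example above in Python; the makespan() helper and the disk sizes (in "minutes of work") are just illustrations taken from the diagrams, not anything the backup software exposes:

import heapq

def makespan(disk_minutes, workers=4):
    """Assign each disk to whichever worker frees up first and
    return the total job time (the busiest worker's finish time)."""
    finish = [0] * workers          # per-worker finish times, kept as a min-heap
    heapq.heapify(finish)
    for d in disk_minutes:
        earliest = heapq.heappop(finish)     # least-loaded worker
        heapq.heappush(finish, earliest + d)
    return max(finish)

# the 18 vdisks from the example: sixteen 5-minute disks, one 15 and one 20
disks = [5] * 16 + [15, 20]

random_order  = disks                       # big disks happen to be queued last
biggest_first = sorted(disks, reverse=True) # sort descending by size

print(makespan(random_order))    # 40 (min)
print(makespan(biggest_first))   # 30 (min)

Running it reproduces the two diagrams: 40 minutes when the big disks end up last in the queue, 30 minutes when they go first.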
So optimizing the job queue might just save some time. This example is for the first active full backup, but I guess it could also be useful for incremental backups (read the biggest changes first).