We have a single-host cluster at one of our sites with an extremely slow backup time compared to our other sites that also use Veeam (albeit with better storage hardware). There are only 3 VMs running on this host: a DC/file server, a print server, and the Veeam backup server. The print server has a single 40 GB disk, and the DC has a 40 GB OS disk and a 150 GB file disk. Backing up just these two VMs takes almost 2 hours.
These two steps on the DC seem to take most of the time. Does anyone know what Veeam is actually doing here, or what could be causing it to take so long?
It also seems to sit at 99% processing for 15+ minutes after completing all actions on the DC.
That job was originally created in Network mode before I switched it to Appliance mode. I went ahead and deleted the backup file and had it create a new full backup. The first run took about 2 hours again, but last night's incremental took 5 minutes total!
Here are those same disks:
Maybe it was having trouble with the CBT comparison against a backup chain that wasn't originally created with hot-add? Anyway, it's working as I expected now.
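If anyone else hits this, here's a quick pyVmomi sketch (pip install pyvmomi) to confirm CBT actually got enabled on the VM after the new full. The host, credentials, and the VM name "DC01" are placeholders for your own environment:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab-only shortcut: skip certificate verification.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi.example.local", user="root",
                      pwd="changeme", sslContext=ctx)
    try:
        # Walk all VMs under the root folder.
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.name == "DC01":
                # True once Changed Block Tracking has been enabled on the VM.
                print(f"{vm.name}: changeTrackingEnabled="
                      f"{vm.config.changeTrackingEnabled}")
        view.DestroyView()
    finally:
        Disconnect(si)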
You may want to check the tasks at the vSphere level related to "Reconfigure virtual machine", which appear as Veeam hot-adds the VM's disks to the proxy for processing. Conflicting tasks like vMotion/snapshot operations would put the hot-add tasks into the queue, causing the wait time.
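If it helps, here's a minimal pyVmomi sketch that pulls recent reconfigure tasks from the task history and shows how long each one sat queued before it actually started. The hostname/credentials are placeholders, and note the task history collector needs vCenter rather than a standalone ESXi host:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab-only shortcut: skip certificate verification.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        # Pull the most recent tasks from the vCenter task history.
        collector = si.content.taskManager.CreateCollectorForTasks(
            vim.TaskFilterSpec())
        collector.SetCollectorPageSize(200)
        for info in collector.latestPage:
            # Hot-add shows up as a reconfigure task against the proxy VM.
            if info.descriptionId == "VirtualMachine.reconfigure":
                queued = None
                if info.queueTime and info.startTime:
                    queued = (info.startTime - info.queueTime).total_seconds()
                print(f"{info.entityName}: state={info.state}, "
                      f"queued {queued}s before starting at {info.startTime}")
        collector.DestroyCollector()
    finally:
        Disconnect(si)

A long gap between queueTime and startTime on those tasks would point at exactly the kind of queuing described above.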