The job contains about 280 VMs with a total size of 17 TB and takes about 12 to 17 hours to finish.
Stats taken from one example run:
Bottleneck: Source 49%, Proxy 6%, Network 40%, Target 34%
Processing rate: 9 MB/s
Read: 209 GB
Transfer: 60 GB
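Some back-of-the-envelope math on those numbers (my own arithmetic, nothing from Veeam's counters): even at the reported 9 MB/s, the data movement only accounts for a fraction of the 12-17 hour runtime:

```python
# Rough arithmetic on the job stats above (pure illustration).
GB = 1024  # MB per GB

read_gb = 209
transfer_gb = 60
rate_mb_s = 9

read_hours = read_gb * GB / rate_mb_s / 3600
transfer_hours = transfer_gb * GB / rate_mb_s / 3600

print(f"Reading 209 GB at 9 MB/s: ~{read_hours:.1f} h")      # ~6.6 h
print(f"Moving 60 GB at 9 MB/s:   ~{transfer_hours:.1f} h")  # ~1.9 h
# The job itself runs 12-17 h, so most of the wall clock is not data movement.
```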
The proxy is a physical server with direct SAN access to the VMware datastores. The Veeam repository is connected to the same hardware via FC.
Because the job takes so long, no other job runs simultaneously after the first couple of hours. At that point the proxy, the repository and vCenter are basically idle, and there is very little load on the production storage.
VMs being processed sit at either 0% or 99% most of the time (30-50 minutes per VM). The time needed for the actual data transfer is comparatively small. The throughput is not great, but that is not my concern here.


"Only" 8 minutes for data processing, but 40 minutes in total from start to finish for this VM as an example.
I haven't done much troubleshooting so far. I tried lowering the maximum number of concurrent tasks, which made it worse.
My next step is to set up a proxy as a VM and use hotadd, in the hope of isolating the cause.
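Another isolation test I might try first: timing a snapshot create/remove outside of Veeam, to see whether vSphere itself is slow at the operations that typically happen during the 0%/99% phases. A rough sketch using pyVmomi (hostname, credentials and VM name are placeholders):

```python
# Time how long vCenter takes to create and remove a snapshot on one VM.
# If these calls take many minutes, the stall is on the vSphere side,
# not in Veeam. Host, credentials and VM name below are placeholders.
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; skips cert checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the test VM by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "some-test-vm")
view.Destroy()

t0 = time.time()
WaitForTask(vm.CreateSnapshot_Task(
    name="timing-test", description="", memory=False, quiesce=False))
t1 = time.time()
WaitForTask(vm.snapshot.currentSnapshot.RemoveSnapshot_Task(removeChildren=False))
t2 = time.time()

print(f"snapshot create: {t1 - t0:.1f} s, remove: {t2 - t1:.1f} s")
Disconnect(si)
```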
I updated to v10 last week; no change. The job ran fine (3-4 hours) a few weeks ago, and no changes were made that would explain this behavior (at least none that I know of).
Any ideas?