I posted here in July. http://forums.veeam.com/vmware-vsphere- ... 22654.html
Basically, the backup of the main server was taking over 50 hours, and Exchange was being taken offline for about 24 hours during the backup.
We are on ESXi 5.0.0 with 3 virtual servers. Veeam B&R 6.5 is installed on one of those virtual servers, and backups are being done to a USB 2.0 drive attached to the host server.
After that posting, I set up an old server and added it as both a proxy and a repository for backups. I was pretty excited to possibly resolve these backup issues.
Last night I completed a full of another server (not the main one referenced in the first post/above) to the old server. The full took just over 6 hours; the same full normally takes 2.5 hours to the USB drive.
Quite disappointed to find it took longer.

USB Method
29/08/2014 8:45:22 PM :: Job started at 8/29/2014 8:45:08 PM
29/08/2014 8:45:22 PM :: Building VM list
29/08/2014 8:45:44 PM :: VM size: 200.0 GB (193.8 GB used)
29/08/2014 8:45:44 PM :: Changed block tracking is enabled
29/08/2014 8:45:59 PM :: Preparing next VM for processing
29/08/2014 8:45:59 PM :: Processing 'xxxx'
29/08/2014 11:17:58 PM :: All VMs have been processed
29/08/2014 11:17:59 PM :: Removing 'G:\Backups\xxxx.vrb' per retention policy
29/08/2014 11:18:03 PM :: Load: Source 33% > Proxy 19% > Network 9% > Target 94%
29/08/2014 11:18:03 PM :: Primary bottleneck: Target
29/08/2014 11:18:03 PM :: Job finished at 8/29/2014 11:18:03 PM
Old Server
8/09/2014 10:00:28 PM :: Job started at 9/8/2014 10:00:13 PM
8/09/2014 10:00:29 PM :: Building VM list
8/09/2014 10:00:50 PM :: VM size: 200.0 GB (193.8 GB used)
8/09/2014 10:00:50 PM :: Changed block tracking is enabled
8/09/2014 10:01:11 PM :: Preparing next VM for processing
8/09/2014 10:04:45 PM :: Processing 'xxxx'
9/09/2014 4:16:16 AM :: All VMs have been processed
9/09/2014 4:16:23 AM :: Load: Source 94% > Proxy 6% > Network 12% > Target 45%
9/09/2014 4:16:23 AM :: Primary bottleneck: Source
9/09/2014 4:16:23 AM :: Job finished at 9/9/2014 4:16:23 AM
Based on the load stats it looks like an improvement, since the bottleneck is no longer the Target, but that doesn't account for why the job takes more than twice as long.
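To put rough numbers on it, the effective throughput can be worked out from the "Processing" and "All VMs have been processed" timestamps in the two logs above. This is just a back-of-the-envelope sketch (it assumes the 193.8 GB used figure is what was actually read, ignoring CBT and compression):

```python
from datetime import datetime

def throughput_mb_s(start, end, gb_used):
    """Effective throughput in MB/s between two log timestamps (M/D/YYYY H:MM:SS AM/PM)."""
    fmt = "%m/%d/%Y %I:%M:%S %p"
    secs = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()
    return gb_used * 1024 / secs

# USB job: processing ran 8:45:59 PM to 11:17:58 PM
usb = throughput_mb_s("8/29/2014 8:45:59 PM", "8/29/2014 11:17:58 PM", 193.8)
# Old-server job: processing ran 10:04:45 PM to 4:16:16 AM the next day
old = throughput_mb_s("9/8/2014 10:04:45 PM", "9/9/2014 4:16:16 AM", 193.8)
print(round(usb, 1), round(old, 1))  # roughly 21.8 vs 8.9 MB/s
```

So the USB job moved data at roughly 21.8 MB/s versus roughly 8.9 MB/s to the old server, even though the bottleneck label shifted from Target to Source.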
The incremental for the server going to the USB finished at 10:04 PM last night, so there was a little overlap. I have set the incremental to the old server to run at 11 PM so they aren't processing at the same time. Both jobs have identical settings, although I changed the new job's storage optimization to LAN target (the USB job uses Local target).
Have I not added the server properly as a proxy or a repository?
Or are the specs of the old server simply worse than writing via USB passthrough?
Any help would be appreciated. Thanks.