Virtual Windows 2008R2 Server on ESXi (5.5U2) 10Gb host also hosting Veeam proxy
- Two vNics, one on ESXi Management subnet, one on Backup Network subnet
- Both nics in one vSwitch
- Multiple VMkernel ports, one per VLAN per traffic type
- Traffic types/VLANs are: ESXi Management, vMotion, iSCSI, Production Network, Backup Network, each with its own respective VLAN
- 10.10.10.X/24 - ESXi Management
- 172.16.X.X/24 - vMotion
- 172.24.X.X/24 - iSCSI
- 10.10.20.X/24 - Prod network
- 10.10.30.X/24 - Backup network
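The VMkernel/VLAN layout above can be sanity-checked from the ESXi shell. A quick sketch (output details will vary with your host; verify the vmk-to-subnet mapping against the list above):

```shell
# List VMkernel interfaces and their IPv4 config
# (confirm each vmk sits on the intended subnet)
esxcli network ip interface ipv4 get

# Confirm each port group carries the VLAN tag you expect
esxcli network vswitch standard portgroup list

# Confirm both uplinks actually negotiated 10000Mb full duplex
esxcli network nic list
```

If an uplink shows 1000Mb here, no amount of Veeam tuning will get past it.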
Exagrid EX13000
- 10Gb Nics on Backup network
- Configured in Veeam B&R console as a de-dupe appliance using Exagrid DataMover
- Please don't point fingers at the SAN; I can do Storage vMotions all day long and see sustained 8Gbps, so I can't see the SAN bottlenecking me down to 94MB/s
After my first test, I'm not seeing speeds any better than I did when everything ran over a 1Gb network. I'm not expecting anything near 10Gb line rate, of course, but I figured I'd see some improvement. Based on that setup, should I reconfigure or adjust something? I never seem to get a straight answer out of support (Veeam or Exagrid) on how I should connect/configure what. Fair enough, there are a lot of variables and possible hardware combinations, so I know there isn't a 'one size fits all' solution out there.
The results from my first test were:
Processing rate: 94MB/s
Bottleneck: Source
Busy: Source 98% > Proxy 30% > Network 17% > Target 0%
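Since the stats above point at Source rather than Network, it may still be worth ruling the 10Gb backup path in or out directly with a raw throughput test. A hedged sketch using iperf3 from the proxy to another host on the backup subnet (the Exagrid itself may not expose an iperf server, and 10.10.30.50 is a made-up address for illustration):

```shell
# On a test host on the backup network (not the Exagrid):
iperf3 -s

# On the Veeam proxy, against that listener (hypothetical address):
iperf3 -c 10.10.30.50 -P 4 -t 30
```

Anything well above ~1Gbps here would mean the backup network itself isn't what's capping the job at 94MB/s, which is consistent with the Source 98% figure.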
When NBD goes through the hypervisor to access the storage, does it interact with vCenter in any way, or does the traffic flow straight from the B&R server to the ESXi host running the guest targeted for backup?
I imagine tomorrow I'll start trying some different combinations of what works best where, but any suggestions or insight would be great. My largest single backup is 5TB, total full backups run around the 10TB range.