Host-based backup of VMware vSphere VMs.
theviking84
Expert
Posts: 119
Liked: 11 times
Joined: Nov 16, 2020 2:58 pm
Full Name: David Dunworthy
Contact:

need for 10gb in the backup path

Post by theviking84 »

I have a physical server which is the repository. It is connected via 10Gb networking.

Then I have a virtual machine which is the proxy. This VM uses hotadd mode to read the VM backup data.

The production datastore with the VMs is all-flash vSAN, with 10Gb between hosts.

So my questions here are...

1. Does the virtual machine proxy need a 10Gb NIC attached to it in order to make 10Gb backup speeds possible?

2. Let's say that initially I can only get a 1Gb link to the proxy VM NIC. Will the proxy at least still be able to "read" the backup data at 10Gb due to hotadd mode? It is only the sending to the repo that will be capped at 1Gb, right?

3. The management NIC IPs of the ESXi hosts are on 1Gb networking. Only the actual vSAN datastore where all the VM data lives is on the 10Gb network. Do I need to move each host management NIC to a 10Gb network as well? Does this affect hotadd processing speed?

So overall, the repo server, proxy server, and ESXi management NICs all need 10Gb NICs for full effect. But will I still get a benefit from using a hotadd proxy, since it can maybe read those disks faster, even if sending is capped? I use vSphere 6.7, so I think the asynchronous reading comes into play as well?
obwielnls
Novice
Posts: 8
Liked: never
Joined: Nov 21, 2018 2:00 am
Full Name: Bill Owens
Contact:

Re: need for 10gb in the backup path

Post by obwielnls »

If you are talking about the virtual NIC in the VM, it doesn't matter. Virtual NICs all run as fast as the underlying hardware allows.
soncscy
Veteran
Posts: 643
Liked: 312 times
Joined: Aug 04, 2019 2:57 pm
Full Name: Harvey
Contact:

Re: need for 10gb in the backup path

Post by soncscy »

Hi David,

1. Yes
2. No, the read speed for the hotadd proxy will be the max read performance the VM itself can sustain. You can test this by manually hot-adding a data-filled VMDK to the proxy and running Microsoft's diskspd against it, setting the read target to the disk number of the hot-added disk as shown in diskmgmt.msc. (It must be a hot-added disk, since you need to account for the VMware infrastructure: snapshot a donor VM, then attach its base disk to your proxy and perform the test.)
3. If there's no way out of the VMware environment except 1Gbit, then you're gated at 1Gbit. I'm assuming the 10Gbit is between datastores for the vSAN environment? Or is there a path out of the vSAN environment to your target repository?
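To put rough numbers on why the link out of the environment matters, here is a back-of-envelope sketch in Python. All figures (the 100 GB change rate, ideal link efficiency) are illustrative assumptions, not measurements from David's environment:

```python
# Back-of-envelope: time to push backup data over a 1 Gbit vs 10 Gbit link.
# Assumes an ideal link (no protocol overhead) purely for illustration.
def transfer_seconds(data_gb, link_gbit, efficiency=1.0):
    """Seconds to move data_gb gibibytes over a link_gbit link."""
    bytes_total = data_gb * 1024**3
    bytes_per_sec = link_gbit * 1e9 / 8 * efficiency
    return bytes_total / bytes_per_sec

incremental_gb = 100  # assumed nightly change rate
print(round(transfer_seconds(incremental_gb, 1) / 60))   # minutes at 1 Gbit
print(round(transfer_seconds(incremental_gb, 10) / 60))  # minutes at 10 Gbit
```

However fast the hotadd read is, the send to the repository takes roughly ten times longer over the 1Gb link, so that link sets the backup window.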

In general, yes, hotadd will be faster than NBD in virtually every situation. Furthermore, NBD shares the VMkernel traffic and limitations, and there are some fairly important limitations for NBD even with 10Gbit VMkernels: NBD uses the NFC buffers in VMware, which have a max of 48 connections in vSphere 6.5/6.7. 7.0u1 can expand this, but it's not magic and is tied to host resources. Plus, remember the NFC buffer is shared with normal VMware management activities (e.g., vMotion).
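As a rough feel for what that 48-connection NFC cap means in practice for NBD, a small sketch (the per-VM disk count and the management headroom below are assumptions, not VMware-documented figures):

```python
# Rough illustration: each NBD disk session holds one NFC connection,
# and vSphere 6.5/6.7 caps NFC at ~48 connections per host (per the post
# above), shared with management traffic such as vMotion.
NFC_CAP = 48            # per-host NFC connection cap in vSphere 6.5/6.7
reserved_for_mgmt = 8   # hypothetical headroom kept free for vMotion etc.
disks_per_vm = 3        # assumed average disks per VM

concurrent_vms = (NFC_CAP - reserved_for_mgmt) // disks_per_vm
print(concurrent_vms)   # VMs whose disks fit in flight before NBD queues
```

Hotadd avoids this ceiling entirely, which is part of why it wins even before raw throughput is considered.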

So hotadd is definitely a plus.
theviking84
Expert
Posts: 119
Liked: 11 times
Joined: Nov 16, 2020 2:58 pm
Full Name: David Dunworthy
Contact:

Re: need for 10gb in the backup path

Post by theviking84 »

Thank you guys. I will use hotadd for sure then.

We have a 10Gb switch which handles the vSAN VMkernel traffic only. So currently the vSAN/VMDK traffic is all 10Gb, but the management NICs in the ESXi hosts are only 1Gb.

So do I need to move the "management" ESXi VMkernel NICs over to 10Gb as well? Or is moving just the proxy/repo enough to get 10Gb speed?