itdirector
Enthusiast
Posts: 59
Liked: 3 times
Joined: Jan 19, 2012 8:53 pm
Full Name: friedman

Dual 10GbE NICs, slow NBD traffic

Post by itdirector »

We currently run ESXi 5.0 vSphere Essentials & Veeam 7.x Essentials on 3 Dell R730 servers. All local DAS, with 7TB of Intel S3500 SSDs in each. Dual 10GbE NICs in each server, all connected to a Dell 10GbE switch. No SAN, no NAS, no vMotion. Simple DAS on each server.

Veeam full backups & full replications between all hosts using hot add are very fast (proxies on each host): averaging 850MB/s+, almost maxing out the 10GbE switch. Copying files between two Win2k12R2 virtual machines on different R730 hosts is similarly fast: averaging 700MB/s+, so we know the SSDs are not a bottleneck. Again, that is MB, not Mb, & that is between hosts.
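
To put those numbers in context, here is the back-of-the-envelope math I am working from (plain Python, using only the figures quoted above):

# Rough sanity math on the throughput figures above.
# 10GbE line rate is 10 Gb/s = 1250 MB/s (decimal) before protocol overhead.
LINE_RATE_10GBE_MBS = 10 * 1000 / 8      # 1250 MB/s
hotadd_mbs = 850                         # hot add backup/replication speed we see
vm_copy_mbs = 700                        # VM-to-VM file copy speed we see

print(f"10GbE line rate : {LINE_RATE_10GBE_MBS:.0f} MB/s")
print(f"hot add         : {hotadd_mbs} MB/s ({hotadd_mbs / LINE_RATE_10GBE_MBS:.0%} of line rate)")
print(f"VM file copy    : {vm_copy_mbs} MB/s ({vm_copy_mbs / LINE_RATE_10GBE_MBS:.0%} of line rate)")
# Both figures are far beyond what 1GbE (125 MB/s) could deliver, so the SSDs,
# the vSwitch and the physical 10GbE switch are clearly not the limit.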

However, when using Veeam backups or replication with NBD on the proxies instead of hot add, speeds drop significantly to around 100MB/s, almost as if it is limiting itself. I am assuming that with NBD the proxies communicate with the host via the ESXi "management network"; given that we get the same slow speeds (100MB/s) when we migrate a VM between hosts, the management network seems to be the bottleneck.
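
In case it helps anyone reproduce this, here is a minimal pyVmomi sketch (the hostname and credentials are placeholders, adjust for your environment) that lists which VMkernel NIC on each host is selected for the management service, i.e. the interface I assume the NBD/NFC connections end up using:

# Minimal pyVmomi sketch: list the VMkernel NICs selected for management
# traffic on each host. Hostname/credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # QueryNetConfig("management") reports which vmk interfaces carry
        # the management service on this host.
        cfg = host.configManager.virtualNicManager.QueryNetConfig("management")
        selected = set(cfg.selectedVnic or [])
        for vnic in cfg.candidateVnic:
            flag = "management" if vnic.key in selected else ""
            print(host.name, vnic.device, vnic.spec.ip.ipAddress, flag)
finally:
    Disconnect(si)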

What doesn't make sense is that we only have one vSwitch configured on each host, with dual 10GbE NICs on the vSwitch. So the proxies above, all of the VMs & the management network are all on the same vSwitch, on the same 10GbE physical switch, using the same 10GbE NICs.
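
For completeness, the vSwitch and uplink layout is easy to double-check the same way; this sketch (same placeholder connection details as above) prints each vSwitch with the negotiated speed of its uplinks, and in our case should show one vSwitch per host with two uplinks at 10000 Mb:

# Minimal pyVmomi sketch: print each vSwitch and its uplinks with their
# negotiated link speed. Hostname/credentials are placeholders again.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net = host.config.network
        # Map physical NIC keys to their negotiated speed in Mb.
        speeds = {p.key: (p.linkSpeed.speedMb if p.linkSpeed else None)
                  for p in net.pnic}
        for vsw in net.vswitch:
            uplinks = [(key.split("-")[-1], speeds.get(key)) for key in vsw.pnic]
            print(host.name, vsw.name, uplinks)
finally:
    Disconnect(si)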

I wanted to see if anyone else has a similar setup & can point me in the right direction to remove the management network bottleneck in our environment, so we can switch from hot add to NBD with Veeam and also get faster migrations between hosts with vCenter.

Thank you.
VladV
Expert
Posts: 224
Liked: 25 times
Joined: Apr 30, 2013 7:38 am
Full Name: Vlad Valeriu Velciu
Contact:

Re: Dual 10GbE NICs, slow NBD traffic

Post by VladV »

We have seen the same behavior. Every disk processed gets limited to 100MB/s, so when processing multiple disks/jobs in parallel you can reach higher speeds. We managed to get speeds around 300MB/s, but when processing a VM with a single large disk we are limited to 100MB/s.
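
To illustrate the pattern: the aggregate speeds behave roughly like a fixed per-connection ceiling. The 100MB/s per-disk figure below is just what we observe, not a documented limit:

# Observed behaviour only: each disk processed over NBD tops out around
# 100 MB/s, so aggregate throughput scales with parallelism until the
# link itself becomes the bottleneck.
PER_DISK_CAP_MBS = 100          # what we measure per disk over NBD
LINK_MBS = 1250                 # 10GbE line rate

for parallel_disks in (1, 2, 3, 6):
    aggregate = min(parallel_disks * PER_DISK_CAP_MBS, LINK_MBS)
    print(f"{parallel_disks} disk(s) in parallel -> ~{aggregate} MB/s")
# One large disk stays pinned at ~100 MB/s; three disks in parallel line up
# with the ~300 MB/s we actually reach.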

We tried many things but couldn't find the cause, so we switched to hot add for large VMs that aren't processed in parallel.

I would also like to know what is causing this.
itdirector
Enthusiast
Posts: 59
Liked: 3 times
Joined: Jan 19, 2012 8:53 pm
Full Name: friedman

Re: Dual 10GbE NICs, slow NBD traffic

Post by itdirector »

Anyone else with this setup who can shed some light & point Vlad & me in the right direction?
id.elizarov
Novice
Posts: 5
Liked: 1 time
Joined: Oct 07, 2014 6:56 am
Contact:

Re: Dual 10GbE NICs, slow NBD traffic

Post by id.elizarov »

I guess limitations in NFC/NBD, such as "...the sum of all NFC connection buffers to an ESXi host cannot exceed 32MB", could cause this behaviour.

https://pubs.vmware.com/vsphere-50/inde ... t.5.5.html
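
If that limit is the cause, the reasoning would look like a bandwidth-delay product cap: a fixed buffer per connection limits single-stream throughput no matter how fast the link is. The buffer and round-trip values in this sketch are made-up illustration numbers, not measured or documented ones:

# Hypothetical illustration only: if each NFC connection effectively gets a
# fixed buffer (window) B, per-connection throughput is capped at roughly
# B / RTT, regardless of the physical link speed.
def nfc_throughput_cap_mbs(buffer_mb, rtt_ms):
    return buffer_mb / (rtt_ms / 1000.0)   # MB per second

# Assumed values for illustration: ~1 MB of effective buffer per connection
# and ~10 ms of effective round-trip/processing latency would cap a single
# stream near 100 MB/s even on 10GbE.
for buffer_mb, rtt_ms in [(1, 10), (2, 10), (4, 10)]:
    print(f"buffer {buffer_mb} MB, rtt {rtt_ms} ms -> "
          f"~{nfc_throughput_cap_mbs(buffer_mb, rtt_ms):.0f} MB/s cap")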
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Dual 10GbE NICs, slow NBD traffic

Post by dellock6 »

I'm not sure about this setting being the limit for NFC/Network Mode transfers; otherwise it would be hard to explain why Network Mode over 10Gb is so much faster than over 1Gb, and with 10Gb cards it can definitely go way above 1Gbps.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
