Host-based backup of VMware vSphere VMs.
backupquestions
Expert
Posts: 186
Liked: 22 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Hot add and nbd on 10gb network

Post by backupquestions »

I've got an all-SSD vSAN VMware 6.5 environment.

All networking is 10 Gb, from ESXi management to my physical Veeam server to my VMs, including one VM proxy server for Veeam.

I have tested NBD backups and found that with per-VM backup files I get many write streams and can reach 500-600 MB/s, which is wonderful.

The problem is that each stream is only about 100 MB/s; as I understand it, VMware limits management (NBD) traffic per stream.

For backups that's fine, since I can still utilize 10 Gb by backing up multiple VMs at once, as mentioned above.

What I really want is a way to leverage 10 Gb for RESTORE. Unfortunately, a restore, which is usually only one VM at a time, runs at only around 140 MB/s, which is above 1 Gb speeds but not by much.

I read about using hot add with my VM proxy and tried that for restores. I am still only getting around 140 MB/s regardless. I've tested with the proxy and the restored VM on the same ESXi host, and got no more speed. The proxy VM has a vmxnet3 NIC connected at 10 Gb. Remember, all drives are SSD as well. The bottleneck statistics point at the proxy, but the proxy has 4 vCPUs and 8 GB RAM, plenty for the 4 tasks I assigned to it.
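Just to put rough numbers on the gap (my own back-of-the-envelope, using Python only for the arithmetic; the 1 TB VM size is a made-up example):

```python
# Rough wall-clock estimate for moving a given amount of data at a fixed
# rate. Rates are my measured figures, rounded; 1 GB = 1024 MB here.

def transfer_hours(size_gb, rate_mb_s):
    """Hours needed to move size_gb of data at rate_mb_s."""
    return size_gb * 1024 / rate_mb_s / 3600

one_tb = 1024  # GB, hypothetical VM size

# Backup: multiple VMs in parallel saturate ~500 MB/s aggregate.
print(round(transfer_hours(one_tb, 500), 1))  # ~0.6 hours

# Restore: one VM, one stream, ~140 MB/s.
print(round(transfer_hours(one_tb, 140), 1))  # ~2.1 hours
```

So the same data that backs up in about half an hour takes well over two hours to come back, purely because the restore runs as a single stream.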



So, questions below:

1. All networking is 10 Gb and everything is backed by flash SSD. Why is hot add slow?

2. Would putting the proxy VM on the same subnet/VLAN as the ESXi management NICs help? It's all 10 Gb regardless, and the extra routing hop surely doesn't hurt much. The job does warn that the "proxy is not on the ESXi management subnet, so performance may suffer", though. I can't see that holding it back this much.

3. I can copy a 4 GB ISO file from the physical Veeam server to the proxy VM at 500 MB/s using a plain Windows file copy, which shows the network is fine.

Overall I'm happy with the backup speed, as I'll just make sure to process multiple VMs at a time. But when the day comes to restore one VM as fast as possible, this isn't great: barely above 1 Gb network speeds despite very fast hardware all around.

One more observation...

The Veeam "restore entire VM" jobs also seem to artificially report a higher processing rate at the end of the job. As an example, I did a full restore of a small VM that restored 7 GB of data. The job bases the speed on total VMDK size and elapsed time, reporting 500 MB/s, yet the statistics page shows a true processing speed of 140 MB/s, and the time to finish lines up with 140 MB/s. Why doesn't it calculate based on the true processing rate? It should ignore total VMDK size, since it is not restoring anywhere near that amount of data.
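To illustrate what I mean (the 25 GB provisioned size is my assumption; it's whatever VMDK size would make the reported number come out at 500 MB/s):

```python
# Reported vs. true rate: if the job divides provisioned VMDK size by
# elapsed time, a mostly-empty disk "restores" at a rate far above what
# actually crossed the wire. Figures based on my 7 GB restore.

actual_gb = 7                        # data actually transferred
elapsed_s = actual_gb * 1024 / 140   # ~51 s at the true 140 MB/s
provisioned_gb = 25                  # assumed total VMDK size

true_rate = actual_gb * 1024 / elapsed_s         # MB/s
reported_rate = provisioned_gb * 1024 / elapsed_s

print(round(true_rate))      # 140
print(round(reported_rate))  # 500
```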

My only other thoughts are relying on the "quick rollback" (reverse CBT) feature and/or Instant Recovery, so I don't need to worry about restoring so much data. Quick rollback seems better than Instant Recovery in most scenarios, too.
HannesK
Product Manager
Posts: 14839
Liked: 3085 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Hot add and nbd on 10gb network

Post by HannesK »

Hello,
how many disks are you restoring in parallel? One big disk, or several smaller ones?

About the "why": support can test a restore with the vixdisklib tool, i.e. with "plain VDDK" and without Veeam components, if you are really interested.

Best regards,
Hannes
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Hot add and nbd on 10gb network

Post by Andreas Neufert »

Please check whether there is a firewall between the proxy and the repository, and whether RPC packet inspection is disabled on it. If it is not disabled, it can interrupt our management connections.

NBD performance is usually up to 150 MB/s per stream and, in total, around 250-350 MB/s per host with multiple streams.
It is potentially faster with the latest ESXi versions, as they have options for asynchronous processing.

A question: when you say you have installed the proxy on a VM, do you use HotAdd mode there? Processing is significantly faster when the underlying disk is faster. You can test this by copying a big file (like a 4 GB server ISO) from one VM disk to another on one of your normal VMs. Some storage systems with deduplication enabled have quite limited single-write-stream performance.

Please also check whether you use network cards that are shared between multiple purposes, where a specific traffic type gets only 10% of the speed. This usually happens on blade systems where storage and network traffic run through the same NICs.
backupquestions
Expert
Posts: 186
Liked: 22 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Hot add and nbd on 10gb network

Post by backupquestions »

Hannes,

I hadn't thought about that. So if I had a VM with only one disk, that is one stream. If I had a VM with 3 disks and did a full VM restore, would it use three streams to restore the VM, giving me maybe 300 MB/s?

Andreas,

I will verify the firewall, thank you.

I am seeing the good performance with multiple streams when backing up. How can I get multiple streams when restoring one VM? In most restore scenarios it will be a single VM, and I want that VM restored fast. As I asked Hannes above, would one VM with several disks mean one stream per disk, as long as the proxy and repository are configured for enough concurrent tasks? I guess a way around the issue is to use Instant Recovery, or better yet quick rollback (reverse CBT), right? Assuming I needed to do a full VM restore, that is.
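If it really is one stream per disk (my assumption, pending confirmation), then splitting data across disks should cut restore time roughly like this (per-stream rate and disk sizes are my example numbers):

```python
# Sketch: if a full-VM restore opens one stream per disk and the proxy /
# repository have enough free tasks, wall-clock time is set by the
# largest disk, not the total size. 140 MB/s is my measured per-stream rate.

def restore_minutes(disk_sizes_gb, per_stream_mb_s=140, max_tasks=4):
    """Estimated restore time when all disks stream in parallel."""
    assert len(disk_sizes_gb) <= max_tasks  # enough concurrent tasks
    return max(disk_sizes_gb) * 1024 / per_stream_mb_s / 60

# One 300 GB disk vs. three 100 GB disks holding the same data:
print(round(restore_minutes([300])))            # ~37 min
print(round(restore_minutes([100, 100, 100])))  # ~12 min
```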

On the proxy VM, yes, I have it set to HotAdd mode, and the job statistics confirm this during the backup/restore jobs. As mentioned in my opening post, I copied a 4 GB ISO at exactly 500 MB/s, and that is just one stream, I would think: a plain Windows copy and paste.
backupquestions
Expert
Posts: 186
Liked: 22 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Hot add and nbd on 10gb network

Post by backupquestions »

Also, does anyone have a comparison showing what asynchronous NBD would give me over 10 Gb? Does it mean I could get more than 150 MB/s per stream? It looks like vSphere 6.7 is required for this. I will definitely update ASAP if it would increase single-stream restore performance.
