I have a customer migrating from Hyper-V with FC storage to VMware vSAN. The backup server and the vSAN nodes have 2x25 Gbit connections to the core network, with an 80 Gbit connection between the switches.
Backup data is stored on a local ReFS volume on the backup server. Backup from vSAN is performed using proxies, giving a speed in excess of 500 MByte/s per job.
Using vPower NFS to "restore" a large server to vSAN gives a sustained speed of about 800 Mbit/s with some spikes up to 5 Gbit/s. That is very low given the infrastructure and the backup speed.
The ESXi server has a vmkernel port in the same subnet as the backup server, and on the backup server there is a hosts file entry for the ESXi host pointing to that IP.
Is there a limit on the vPower NFS service?
Any suggestions on how to increase the speed are appreciated.
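For scale, a quick back-of-the-envelope check in Python (the 1 TB VM size below is an assumption for illustration): 800 Mbit/s is only about 100 MByte/s, roughly a fifth of the backup speed and a small fraction of the link capacity.

```python
# Back-of-the-envelope: compare the observed vPower NFS restore rate
# to the backup rate and the link capacity (numbers from the post above).

def mbit_to_mbyte(mbit_per_s: float) -> float:
    """Convert megabits per second to megabytes per second."""
    return mbit_per_s / 8

link_capacity = mbit_to_mbyte(2 * 25_000)  # 2x25 Gbit/s uplinks
backup_rate = 500                          # MByte/s per job
restore_rate = mbit_to_mbyte(800)          # sustained vPower NFS rate

print(f"link capacity: {link_capacity:7.0f} MByte/s")
print(f"backup rate:   {backup_rate:7.0f} MByte/s")
print(f"restore rate:  {restore_rate:7.0f} MByte/s")

# Restore time for an assumed 1 TB VM at the observed sustained rate:
vm_size_mbyte = 1_000_000
print(f"1 TB restore:  {vm_size_mbyte / restore_rate / 3600:.1f} hours")
```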
Hannes Kasparick (Product Manager):
Re: vPower NFS speed
Hello,
Just to make sure: are you using the latest version of VBR?
There are some registry keys around instant recovery that might help. I suggest opening a performance case with support; please post the case number here for reference.
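For what it's worth, VBR tunables normally live under HKLM\SOFTWARE\Veeam\Veeam Backup and Replication on the backup server. A minimal sketch of setting one such DWORD value with Python's winreg; the value name below is a placeholder, not a real key, so use the exact name, type, and data that support provides:

```python
# Minimal sketch of setting a VBR registry tunable on the backup server.
# The hive path is the standard VBR location; the value name
# "SomeInstantRecoveryTunable" is a PLACEHOLDER -- use the exact name,
# type, and data that Veeam support provides for your case.
import winreg

KEY_PATH = r"SOFTWARE\Veeam\Veeam Backup and Replication"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "SomeInstantRecoveryTunable", 0,
                      winreg.REG_DWORD, 1)

# Restart the Veeam Backup Service afterwards so the value is picked up.
```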
I know that Storage vMotion with Instant VM Recovery can be time consuming. There are limitations (as in any piece of software), but it is hard to say whether you are hitting them without support checking the logs.
Thanks,
Hannes
Re: vPower NFS speed
The customer is on VBR 11 (11.0.0.837), so new enough that it shouldn't be a problem.
What are these registry keys, and what do they do? The installation is on an offline system, and exporting logs is not an option.
I actually found Quick Migration, forcing Veeam transport, to be faster than storage migration.
There are proxy VMs on each host of the vSAN cluster, used primarily for backup. I used one of them as the destination proxy for the "restore"; it is located on the host where the Veeam NFS datastore is mounted. Should the same VM be used as the source proxy as well, or should the server holding the data be the source?
Tom Sightler (VP, Product Management):
Re: vPower NFS speed
There is no limit on the vPower NFS service, but of course there is some overhead. Just to provide some reference, I did a quick test in my lab: an instant restore of a 100GB VM filled with mostly compressed data, followed by an svMotion. The VM disks transferred in about 5 minutes, sustaining ~250MB/s with peaks to 300MB/s. Not super fast given that these are very fast disks, but much better than the speeds you are seeing. For comparison, I created a datastore on the same disk pool the repository lives on and did an svMotion of the same disk, without vPower involved and without having to read from backups; svMotion performance was ~350MB/s sustained with spikes to 450MB/s.
That of course shows that vPower NFS adds some overhead, but this is expected: it is an additional hop, adding latency, and it has to read from a backup file rather than directly from disks. Overall, I thought those performance numbers were pretty decent, even if my hardware is capable of much more for backups and full restores.
Keep in mind that Storage vMotion is not really designed to transfer VMs at maximum throughput; rather, it transfers them while minimizing the impact on the running VM. It is never going to be the fastest way to move data, but the speed you're seeing definitely looks low. That said, there are many possible causes, for example the speed of the backup disks: if the repository is just a relatively small number of large HDDs, IOPS will be quite limiting, increasing latency and slowing the svMotion. I'd take a look at the read latency on the vPower NFS datastore to see whether that is the core issue limiting throughput. There might not be much that can be done about it, but at least it would give some clue.
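To put the latency point in numbers, here is a rough model in Python (queue depth and block size are illustrative assumptions, not measured values):

```python
# Rough model (illustrative assumptions, not measured values): with a
# limited number of outstanding I/Os, sequential read throughput is
# roughly (outstanding I/Os * block size) / latency.

def throughput_mbyte_s(latency_ms: float, block_kb: int = 512,
                       outstanding_ios: int = 4) -> float:
    ios_per_second = outstanding_ios / (latency_ms / 1000)
    return ios_per_second * block_kb / 1024

for latency_ms in (2, 5, 10, 20, 50):
    print(f"{latency_ms:3d} ms read latency -> "
          f"~{throughput_mbyte_s(latency_ms):5.0f} MByte/s")
```

Under these assumptions, around 20 ms of read latency the model lands near the ~100 MByte/s being observed, which is why checking the datastore read latency is a reasonable first step.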