I was wondering if some additional information could be provided on vPower NFS recommendations and/or best practices. I have configured vPower NFS to be on an iSCSI drive (part of our SAN) connected to our physical backup server running Veeam. What I'm seeing is high latency (approx. 2000 ms) for both reads and writes after the third VM is started in the SureBackup job. This basically makes the VMs non-responsive (can't log in via console, RPC requests stop functioning). I know the documentation states the NFS location needs 100 GB or more of free space for write operations. Any other information?
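As a quick sanity check before digging into Veeam itself, you can measure the raw synchronous write latency of the volume hosting the vPower NFS folder. The sketch below is a minimal, hypothetical probe (the file name `vpower_probe.bin` is a placeholder; run it from inside the folder configured as the vPower NFS root to test that specific volume):

```python
import os
import time

def measure_write_latency(path, block_kb=64, blocks=64):
    """Time synchronous sequential writes; return average ms per block."""
    block = os.urandom(block_kb * 1024)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        start = time.perf_counter()
        for _ in range(blocks):
            os.write(fd, block)
            os.fsync(fd)  # flush each block so we time the storage, not the OS cache
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.remove(path)
    return elapsed / blocks * 1000.0

if __name__ == "__main__":
    # "vpower_probe.bin" is a hypothetical name; place it on the vPower NFS volume.
    print(f"avg write latency: {measure_write_latency('vpower_probe.bin'):.1f} ms/block")
```

If this probe already shows high latency with no SureBackup VMs running, the bottleneck is the storage path rather than vPower itself.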
Thanks
-
- Influencer
- Posts: 15
- Liked: never
- Joined: Apr 27, 2011 3:13 pm
- Contact:
-
- Expert
- Posts: 144
- Liked: never
- Joined: May 06, 2010 11:13 am
- Full Name: Mike Beevor
- Contact:
Re: vPower NFS Recommendations
If you bear in mind that you are essentially requesting data from a deduplicated and compressed backup file, it will naturally run slower than normal, and running three VMs concurrently splits the available storage performance between them. Could you give some more details on the storage, please? How many spindles, what kind of read/write performance do you normally get from the disk, and what type of disk is it?
Also, what kind of machines are you starting up? If they are all high-I/O application servers, you may experience these performance problems. And what boot delays and start-up times are you giving each machine, or are you starting them all concurrently as well?
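Mike's point about concurrent VMs aggregating load on the storage can be illustrated with rough arithmetic. The numbers below are hypothetical placeholders, not measurements from the poster's setup:

```python
def oversubscription(vms, per_vm_iops, array_iops):
    """Ratio of requested to available random IOPS; >1 means queues build and latency climbs."""
    return (vms * per_vm_iops) / array_iops

# Hypothetical figures for illustration only:
ARRAY_IOPS = 600   # e.g. 8 SATA spindles at roughly 75 random IOPS each
PER_VM = 400       # random I/O demand of one busy application server

for n in range(1, 5):
    ratio = oversubscription(n, PER_VM, ARRAY_IOPS)
    state = "oversubscribed" if ratio > 1 else "within capacity"
    print(f"{n} VM(s): {ratio:.2f}x of array capacity ({state})")
```

Once the ratio passes 1.0, requests queue up and per-request latency grows sharply, which matches the pattern of the thread: fine for one or two VMs, unusable at three.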
Thanks
Mike
-
- Influencer
- Posts: 15
- Liked: never
- Joined: Apr 27, 2011 3:13 pm
- Contact:
Re: vPower NFS Recommendations
We have a Drobo Pro connected via iSCSI to the Veeam backup server. It holds eight 1 TB SATA II drives configured in Drobo's proprietary BeyondRAID layout. During Veeam backups we average 200 MB/s. Currently the SureBackup job has 3 VMs: a domain controller, a SQL Server, and a SharePoint server. Boot delay is at the default settings; however, the application initialization timeout has been raised per technical support from 120 sec to 240 sec. VMs are brought up one at a time, with the domain controller first since it is in an Application Group.
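One thing worth noting: 200 MB/s during backups is a sequential-throughput figure, while running VMs generate small random I/O, which SATA spindles handle far more slowly. A minimal sketch to measure random-read latency separately (the file name is hypothetical; point it at the Drobo-backed volume):

```python
import os
import random
import time

def measure_random_read_latency(path, size_mb=64, reads=200, block_kb=4):
    """Build a scratch file, then time random small reads; return average ms per read."""
    block = block_kb * 1024
    size = size_mb * 1024 * 1024
    with open(path, "wb") as f:
        f.write(os.urandom(size))
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(reads):
            # Note: a freshly written file may sit in the OS cache; use a file much
            # larger than RAM (or a cold file) to approach the device's true latency.
            os.lseek(fd, random.randrange(size - block), os.SEEK_SET)
            os.read(fd, block)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.remove(path)
    return elapsed / reads * 1000.0
```

If random reads are slow here even without SureBackup running, the sequential backup speed is not a useful predictor of vPower performance on that array.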
Based on the documentation, the NFS datastore is used for write cache. Would it be better to just restore the VM? Would this improve performance, since Veeam would build the VM prior to publishing it to ESX?
What kind of performance data have others seen during testing, or even during development testing?
I guess I'm just looking for recommendations: things to consider for the NFS datastore, such as "at least 100 GB of free disk space".
Thanks
-
- Chief Product Officer
- Posts: 31630
- Liked: 7128 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: vPower NFS Recommendations
cmoody wrote: What kind of performance data have others seen during testing, or even during development testing?
A 3rd-party benchmark of Exchange VM instant recovery through vPower was part of the independent validation review available here:
http://go.veeam.com/wp-lab-test-vmware- ... greus.html
-
- Influencer
- Posts: 15
- Liked: never
- Joined: Apr 27, 2011 3:13 pm
- Contact:
Re: vPower NFS Recommendations
Thanks for the link!
I did move the vPower NFS datastore from the SAN to a local drive on the Veeam backup server. This helped reduce disk I/O latency to less than 600 ms.