-
- Novice
- Posts: 4
- Liked: never
- Joined: Apr 28, 2012 1:18 am
- Full Name: Moey
- Contact:
Slow Restore Speed - 27MB/s - Tips/Ideas?
What is the average restore speed that everyone is noticing?
We recently set up a few servers out at our colo and have been doing some testing recovering VMs from our production site. The backup files are located on an 8-disk array connected to the Veeam VM (Win 7 x64 via the MS iSCSI initiator, NTFS, 10GbE). We are restoring to a 16-disk array (iSCSI connection to vSphere, VMFS, 10GbE). Our recovery speed seems to get stuck at 27MB/s.
Currently the host that the Veeam VM runs on is not on 10GbE, but I figured we could at least saturate a 1Gb link with a restore. Performance testing (Iometer) has been run on both storage devices, and each can easily saturate a 1Gb pipe.
Specs for the Veeam VM are 4 vCPUs and 8GB memory. There is no CPU/memory contention on the host running the Veeam VM; vCPU and memory usage are around 50% during a restore.
Anyone have any tips/ideas on why I can't get these restores moving faster? Recovering larger VMs (1TB+) is becoming a pain at this rate.
-
- Novice
- Posts: 4
- Liked: never
- Joined: Apr 28, 2012 1:18 am
- Full Name: Moey
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Adding some more details:
My Setup:
- Veeam proxy on a VM - 4 vCPUs, 8GB memory, Veeam 6.1
- Veeam backup repository - 8-disk RAID 5, 10GbE, NTFS, connected to the Veeam proxy via the MS iSCSI initiator
- ESXi 4.1 U2
- vSphere datastore - 16-disk RAID 10, 10GbE, VMFS, VMware iSCSI initiator
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
For fast full VM restores, we recommend using hot add restores via a virtual backup proxy. Make sure that a proxy capable of hot add is picked during the restore (this information is available in the restore session log).
-
- VP, Product Management
- Posts: 27371
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
... or use Instant VM Recovery and Migration jobs to move VM data back to production storage.
-
- Novice
- Posts: 4
- Liked: never
- Joined: Apr 28, 2012 1:18 am
- Full Name: Moey
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
foggy wrote: For fast full VM restores, we recommend using hot add restores via a virtual backup proxy. Make sure that a proxy capable of hot add is picked during the restore (this information is available in the restore session log).
Can you expand a little more on this? The virtual backup proxy does have access to all the datastores that the VMs are being restored to.
Vitaliy S. wrote: ... or use Instant VM Recovery and Migration jobs to move VM data back to production storage.
After doing some digging yesterday, I came across a post mentioning this method. We are not licensed for SvMotion at our DR site, so it would have to be an offline migration.
Any reason why doing an instant restore and then a storage migration would be quicker than a full Veeam restore?
-
- VP, Product Management
- Posts: 27371
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
1. If the backup proxy can access the datastores you're restoring VMs to, it will automatically attempt to restore the selected VMs directly via the ESXi I/O stack (hot add mode). This approach is much faster than VM restores performed over the network stack.
2. Yes, but using Instant VM Recovery with the Quick Migration functionality (even without Storage vMotion licensed) will allow you to minimize the "non-working" hours for this VM. Please take a look at our User Guide (page 54) for additional details on this process.
Hope this helps!
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Vitaliy S. wrote: If the backup proxy can access the datastores you're restoring VMs to, it will automatically attempt to restore the selected VMs directly via the ESXi I/O stack (hot add mode).
Open the session log (right-click the restore session and select the Log tab) and find the "Using target proxy..." record. Check whether it has the [hotadd] label after the name of the proxy; that means the hot add transport mode is actually being used to populate VM data on the target host.
-
- Novice
- Posts: 4
- Liked: never
- Joined: Apr 28, 2012 1:18 am
- Full Name: Moey
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
I just checked our restore jobs, and they are indeed using hot add mode.
I will be doing some testing of Instant Recovery with Quick Migration.
Thanks for all the responses. I'll post some updates after I have tested.
-
- Novice
- Posts: 8
- Liked: never
- Joined: Jul 03, 2012 11:56 pm
- Full Name: Trent Lane
- Contact:
Restoring. Faster alternative to NBDSSL?
[merged]
Hi,
We're restoring a couple of servers from backup.
The proxy is restoring using NBD SSL.
Is there any faster method for restoring VMs?
It backs up using SAN mode, so I was just curious.
Trent
-
- Novice
- Posts: 8
- Liked: never
- Joined: Jul 03, 2012 11:56 pm
- Full Name: Trent Lane
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Sorry,
I'm not hijacking your thread; my post was merged into this one.
I just wanted to clarify my configuration.
We have 4 blades:
3 are ESXi hosts.
The 4th runs Server 2008 R2 and runs Veeam as a standalone server (it's the backup proxy).
The 2008 R2 blade has access to the LUNs used by the 3 hosts for VMs, so it is able to back up via direct SAN access.
Does my backup proxy need to be a VM for hot add to work?
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
No, you can keep your proxy, but also create an additional virtual proxy just for the purpose of hot add restores. Kindly review the sticky FAQ topic for all information regarding hot add mode and its requirements. Thanks!
-
- Novice
- Posts: 8
- Liked: never
- Joined: Jul 03, 2012 11:56 pm
- Full Name: Trent Lane
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Thanks.
I did read the FAQ, just needed that extra bit of info.
Got it going; the proxy is using hot add.
Thanks again.
-
- Expert
- Posts: 179
- Liked: 8 times
- Joined: Jul 02, 2013 7:48 pm
- Full Name: Koen Teugels
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Still slow here after adding a virtual proxy.
-
- Service Provider
- Posts: 182
- Liked: 48 times
- Joined: Sep 03, 2012 5:28 am
- Full Name: Yizhar Hurwitz
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Hi.
I suggest that you also try disabling the VMware VAAI block zero function on the ESXi host,
then test whether it makes any difference:
VMware KB: Disabling the VAAI functionality in ESXi/ESX
http://kb.vmware.com/selfservice/micros ... Id=1033665
See step 7 below (7. Change the DataMover.HardwareAcceleratedInit setting to 0).
To disable VAAI using the vSphere Client:
1. Open the VMware vSphere Client.
2. In the Inventory pane, select the ESXi/ESX host.
3. Click the Configuration tab.
4. Under Software, click Advanced Settings.
5. Click DataMover.
6. Change the DataMover.HardwareAcceleratedMove setting to 0.
7. Change the DataMover.HardwareAcceleratedInit setting to 0.
8. Click VMFS3.
9. Change the VMFS3.HardwareAcceleratedLocking setting to 0.
10. Click OK to save your changes.
11. Repeat this process for all ESXi/ESX hosts connected to the storage.
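For anyone who prefers the command line, the same three settings can be changed with esxcli on each host. This is a sketch based on the same KB article; I'm assuming an ESXi 5.x shell, so verify the option paths on your build:
# 0 = disabled, 1 = enabled (the default)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0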
Please tell us if it changes anything, for good or bad.
Yizhar
-
- Service Provider
- Posts: 182
- Liked: 48 times
- Joined: Sep 03, 2012 5:28 am
- Full Name: Yizhar Hurwitz
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
BTW,
When changing the DataMover.HardwareAcceleratedInit setting,
no restart or further action is needed, so you can quickly and easily see the results.
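If you want to confirm the current value before and after flipping it, something like this should work (again assuming an ESXi 5.x shell; check the option path on your build):
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
The output includes the current Int Value alongside the default, so you can tell at a glance whether the primitive is on.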
Yizhar
-
- Service Provider
- Posts: 182
- Liked: 48 times
- Joined: Sep 03, 2012 5:28 am
- Full Name: Yizhar Hurwitz
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
More info:
I also have a 3PAR (but a smaller machine: a 3PAR 7200 with 12 SATA disks in RAID 1 used for the test),
and I tried to restore a small test VM with a roughly 50GB disk (most of it empty, as this is a test machine).
I got about 70MB/s during the restore, and about 8 minutes to restore the 50GB VMDK.
I tried with and without block zero and got the same speed; however, I still think you should test it.
This makes me think that one of the differences between my small test and your production VM is the size of the data: the VBK size on the repository and/or the VMDK size on the datastore.
Have you checked memory and CPU resources on the backup proxy during the restore?
Can you try restoring a smaller VM for the test and compare results?
Yizhar
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
I do agree that this would be interesting to test.
-
- Expert
- Posts: 179
- Liked: 8 times
- Joined: Jul 02, 2013 7:48 pm
- Full Name: Koen Teugels
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
But Veeam calculates its average on the 50GB, not on the 20GB it really restored out of the 50GB.
So in my case Veeam restored 2TB in 2 minutes, but the disk was empty, so there was no real way to measure it.
So you also got about 60GB an hour; if you had a bigger VM that was full, it would go slowly as well.
The proxy has 8 CPUs and 32GB of RAM, so no issue there.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
The idea would be to test something smaller to see if it's something to do with the size of the VM being restored. For example, I can restore a 100GB VM in my home lab at about 60MB/s. This is 100GB of actual data, not a "fake" speed from restoring blank space. I recently tested restores on some NetApp hardware in a vendor lab environment, and performance was around 110MB/s for restores (real transfer speed). The idea would be to see if there is something impacting restores of very large VMs (perhaps an agent memory issue or something). You can easily measure the restore performance of the actual data by simply watching the bandwidth utilization, or better yet, the datastore utilization in vCenter.
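If you'd rather capture the numbers than eyeball them, you can record esxtop in batch mode on the target host during the restore and review the datastore and network throughput afterwards. A sketch, with arbitrary interval and sample count (flags as in the 5.x esxtop, so double-check against esxtop -h on your build):
# sample every 5 seconds for 10 minutes, exporting all counters to CSV
esxtop -b -a -d 5 -n 120 > /tmp/restore-stats.csv
The CSV can then be loaded into Windows perfmon or a spreadsheet to graph the write rate over the life of the restore.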
-
- Influencer
- Posts: 17
- Liked: 14 times
- Joined: Feb 03, 2011 10:29 am
- Full Name: PGITDept
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
I just thought I'd post some of our findings regarding our slow restores:
We're running ESXi 5.5 and Veeam 7 R2. Our storage platforms are Dell EQL (4 arrays: 15k RAID 10, 10k RAID 10, and 2x 7.2k RAID 6) and a Nimble CS460G. We were testing restores from tape of a fairly smallish server (1.3TB used of 4TB). We were restoring to a new datastore on the EQL SATA storage pool made up of the two RAID 6 arrays. We were getting about 15MB/s, which is pretty horrendous, and it took 29 hours to restore. That pool is fairly heavily used, but it should still give better performance than that. So we built a smaller VM of around 150GB and tested that; the results were pretty much the same.
We figured it might be the storage pool, so we moved the restore to our 15k RAID 10 pool. That got to the dizzy heights of 42MB/s. At this point we were starting to panic, as it would take us over a month to restore from a full failure. Not good. We then restored to the Nimble and reached 80MB/s: not brilliant, but more acceptable. So we started doing some deeper tests and noticed that our write rates to some VMs were also fairly poor, although it was hard to pin down.
Anyway, I found this thread and started working through it. Veeam restores were consistently poor, so we could use them as a measuring stick. We tried hot add and other mechanisms, found they made no difference, and were fairly convinced the issue was more global than Veeam.
So we got to the post by Yizhar about disabling VAAI functionality on the host. We run vSphere 5.5 Standard edition, so we don't have VAAI support, even though our Nimble and EQL support it to differing degrees.
BOOM: our restore to the SATA pool went up to 90MB/s+, the 15k pool to around 120MB/s, and the Nimble clocked a restore of nearly 200MB/s, which is amazing considering it's a large sequential write, the workload the Nimble hates chewing on the most.
So we're still testing this on a spare host, but it appears that disabling VAAI has made a considerable difference across all our systems, including restore operations from Veeam. It looks like ESXi has VAAI enabled by default, although I'm not sure what effect it has if you're not licensed to use it. Surely not licensed means disabled?
Anyway, it's made a big difference for us and we'll continue testing. I just wanted to say thanks for the post, Yizhar!
-
- Expert
- Posts: 201
- Liked: 45 times
- Joined: Dec 22, 2009 9:00 pm
- Full Name: Stephen Frost
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Really interesting discussion, guys!
Can someone perhaps elaborate on what other operations (i.e. other than Veeam restores) are likely to benefit from this change? Are there any possible net negatives I should consider?
-
- Expert
- Posts: 201
- Liked: 45 times
- Joined: Dec 22, 2009 9:00 pm
- Full Name: Stephen Frost
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Digging further, I found this article:
http://kb.vmware.com/selfservice/micros ... Id=1021976
I ran this command: esxcli storage core device vaai status get
It returns a display like this for each LUN on my Dell MD3200:
naa.6842b2b0004665ba000003964d1101b2
VAAI Plugin Name:
ATS Status: unsupported
Clone Status: unsupported
Zero Status: supported
Delete Status: unsupported
-
- Influencer
- Posts: 17
- Liked: 14 times
- Joined: Feb 03, 2011 10:29 am
- Full Name: PGITDept
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
So we're still testing, but we have found that it's definitely the block zero setting (DataMover.HardwareAcceleratedInit) that is affecting restore speed. On the 15k RAID 10 shelf yesterday we started a restore with it turned on and got 45MB/s; during the restore we switched it off and the rate moved to about 150MB/s, before we turned it back on again and watched it drop back to ~42MB/s.
We're still running tests and will be doing more on Monday. If anything of interest comes up, I'll post it.
-
- Influencer
- Posts: 16
- Liked: 6 times
- Joined: Feb 26, 2013 3:48 pm
- Full Name: Ryan
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
I can confirm that at least parts of the VAAI instruction set are used regardless of the ESXi license level. (I had a recent support case with VMware that involved disabling it as a troubleshooting step, even though we have Standard.)
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Guys, these are really interesting findings, thanks for sharing them.
This definitely needs further investigation and testing to address the situation and confirm it's consistent behaviour. Please keep updating this thread as your tests progress. Thanks!
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Influencer
- Posts: 17
- Liked: 14 times
- Joined: Feb 03, 2011 10:29 am
- Full Name: PGITDept
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Okay, so we built a new 5.5 host and tested it on an eval license to give us Enterprise functionality with full VAAI support. It performed exactly the same as in our testing with Standard: poor restore speeds until we turn off the block zero setting (DataMover.HardwareAcceleratedInit), at which point it's 3-4 times faster. I'm still unable to ascertain whether I'm taking a performance hit elsewhere by disabling this.
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Hi,
As the name implies, the "block zero" primitive is involved when vSphere needs to write zeroes to storage. It's also called WRITE_SAME. One scenario where this primitive is invoked is the creation of thick disks. And if you think about it, that makes sense: restoring a VMDK implies zeroing out the disk regions where the VMDK itself is going to be written.
This is for sure something I'd like to test in my lab once I've finished upgrading to 5.5.
PS: block zero is NOT involved in cloning; the setting in that case is HardwareAcceleratedMove, and the VAAI primitive is "Clone Blocks/Full Copy/XCOPY".
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Block zeroing/WRITE_SAME is used with thin disks as new segments are allocated, which of course happens a lot during a full VM restore of a thin-provisioned disk. If the storage supports the call, in theory this should be faster, certainly in cases that are bandwidth constrained, since otherwise the ESXi host actually writes zeros to the newly allocated segment prior to writing the actual data (you can see this extra traffic by watching the iSCSI network during a restore; it will generally be 2x the throughput of the restore). Perhaps in cases where bandwidth is not the bottleneck, the added latency of waiting for the call to return actually makes the process slower, as I believe it is a synchronous call that must be acknowledged before the actual data is sent. It would be interesting to see the esxtop output of the ZERO parameters from a box that supports zeroing, with this feature turned on and off.
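For anyone who wants to capture those counters, the interactive esxtop keystrokes are roughly as follows on 5.x. I'm writing these from memory, so treat them as a sketch and verify the field letters on your build:
esxtop          (start it in an SSH session on the host)
u               (switch to the disk device view)
f               (open the field selector, then toggle the VAAISTATS field on)
With the VAAI stats visible, watch the ZERO, ZERO_F, and MBZERO/s columns while a restore runs with the primitive on, then again with it off.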
-
- Service Provider
- Posts: 182
- Liked: 48 times
- Joined: Sep 03, 2012 5:28 am
- Full Name: Yizhar Hurwitz
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Hi.
I would like to add the following:
* I have opened a case with Dell EqualLogic about the block zero performance problem with VMware.
Here is the case number:
SR# 877759965
It was escalated to Level 2 support, but strangely they were unable to reproduce the problem in their labs.
So if you can open another case and refer to mine, it might help them (EqualLogic support) better understand the scale of the problem.
* If you experience similar problems, please open a VMware case in addition to the storage vendor case.
* The problems are not directly related to Veeam restores, but to any operation that needs to write data to not-yet-allocated VMFS disk space. This triggers the block zero VAAI primitive, but in small increments rather than in bulk, and that is probably the cause of the performance problem.
For example:
If I create a THICK EAGER ZERO VMDK, there is no problem: it will block zero in seconds.
However, if I create a THICK LAZY ZERO or THIN VMDK and write data to it (for example, a full format from within the guest OS, or simply copying a large file into it), this will cause zero operations in small increments and a noticeable performance hit.
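A quick way to see the block zero path in isolation is to time an eager-zeroed disk creation with the primitive enabled and then disabled. This is a sketch; the datastore path and size are placeholders, and the vmkfstools syntax is from ESXi 5.x:
# with /DataMover/HardwareAcceleratedInit = 1 (zeroing offloaded to the array)
time vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/<datastore>/zero-test.vmdk
vmkfstools -U /vmfs/volumes/<datastore>/zero-test.vmdk
# set HardwareAcceleratedInit to 0, repeat the creation, and compare the times
If the array's WRITE_SAME implementation is healthy, the offloaded run should finish much faster; if it is slower, that points at the same problem this thread is chasing.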
I have noticed this with a Dell EqualLogic array, but it might also affect other vendors and models.
Yizhar
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Slow Restore Speed - 27MB/s - Tips/Ideas?
Guys, just to try and isolate the problem, let me recap:
PGITDept: EqualLogic (problems) + Nimble
Yizhar: EqualLogic (problems) + 3PAR (no difference)
@PGITDept, I can't tell from your posts whether you also ran your tests against the Nimble storage, and whether VAAI on/off affects those restores. This week I'm going to run the same tests against my HP StoreVirtual VSA cluster, and I'll let you know the results. *If* the numbers stay the same regardless of VAAI status, and if the Nimble storage performs the same, it would suggest this is an EQL problem more than a VAAI library problem.
Do you both have the latest firmware revision on the EQL?
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1