Slow Restore Speed - 27MB/s - Tips/Ideas?

VMware specific discussions

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by pgitdept » Mon Feb 24, 2014 10:19 am

dellock6 wrote:Guys, just to try and isolate the problems, let me recap:
PGITDept: Equallogic (problems) + Nimble
Yizhar: Equallogic (problems) + 3Par, (no difference)

@PGITDept, I can't tell from your posts whether you also ran your tests against the Nimble storage, and whether VAAI on/off affects your restores. This week I'm going to run the same tests against my HP StoreVirtual VSA cluster, and I'll let you know the results. *If* the numbers stay the same regardless of VAAI status, and if the Nimble storage performs the same, it would suggest this is an EQL problem more than a VAAI libraries problem.

You both have the latest firmware revision on EQL?

Luca.


Interestingly, we also found the exact same problems with Nimble: 80MB/s restore speed with VAAI Block Zero ON. 200MB/s with it OFF.

We're not on the latest EQL firmware; we're on 6.0.2. On the Nimble we're on 1.4.7.
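
For anyone wanting to reproduce the test: the Block Zero toggle maps to a standard ESXi advanced setting. A minimal sketch of the commands (run per host; takes effect immediately, as far as I know):

# Show the current state of the Block Zero (WRITE SAME) primitive
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit

# Disable Block Zero (use -i 1 to re-enable)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0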
pgitdept
Influencer
 
Posts: 17
Liked: 14 times
Joined: Thu Feb 03, 2011 10:29 am
Full Name: PGITDept

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by yizhar » Mon Mar 03, 2014 10:51 am

pgitdept wrote:We're not on the latest EQL firmware, we're on 6.0.2.


Upgrading to the latest EQL firmware won't help (I've tested).

I suggest that you open a case with EQL support and refer to my case, which was escalated to level 2, so they can understand that this is not an isolated issue.

Please also try to open a VMware support case - my customer's VMware support has expired, and it will take a long time for them to renew it.

Yizhar
yizhar
Expert
 
Posts: 179
Liked: 48 times
Joined: Mon Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by cmgurley » Wed Jul 02, 2014 4:16 pm

I'm about to create a support case, but have there been any updates or developments on this? I'm experiencing 12-13MB/s restore speeds to VMware (5.5), in contrast to 165MB/s to Hyper-V. Latest version of Veeam B&R Enterprise. Storage source/target is a 3PAR V400 (HP StoreServ 7400 equivalent, I believe). I've tested with VAAI on and off and see the same results.

Thanks,
Chris
cmgurley
Lurker
 
Posts: 2
Liked: never
Joined: Wed Jul 02, 2014 4:12 pm
Full Name: Chris Gurley

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by cmgurley » Wed Jul 02, 2014 4:55 pm

As an addendum to my prior post, the official job statistics showed 39 and 47MB/s with and without VAAI (no effective difference; the same 12-13MB/s during the job). Open case # is 00594580.
cmgurley
Lurker
 
Posts: 2
Liked: never
Joined: Wed Jul 02, 2014 4:12 pm
Full Name: Chris Gurley

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by Gostev » Wed Jul 02, 2014 7:53 pm

Are you specifying a hot add proxy for the restore?
Gostev
Veeam Software
 
Posts: 21396
Liked: 2350 times
Joined: Sun Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by Vitaliy S. » Wed Jul 02, 2014 8:42 pm

Also, can you please tell us what performance you get when using the datastore browser in the vSphere Client to upload a big file to the same datastore you're restoring VMs to?
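
If it's easier than timing the datastore browser, a rough equivalent from the ESXi shell would be a plain sequential write to the target datastore. A minimal sketch (assumes SSH access to the host; "Datastore1" is a placeholder):

# Time a 4GB sequential write of zeros straight to the datastore
time dd if=/dev/zero of=/vmfs/volumes/Datastore1/speedtest.tmp bs=1048576 count=4096

# Remove the test file when done
rm /vmfs/volumes/Datastore1/speedtest.tmp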
Vitaliy S.
Veeam Software
 
Posts: 19570
Liked: 1104 times
Joined: Mon Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by chjones » Thu Apr 30, 2015 9:11 pm

Just to comment on this: I am having very similar issues. Restore speeds were consistently around 11-12MB/sec restoring to my HP 3PAR 7400 via the SAN, Network, or Hot-Add methods. After a lot of testing I found the issue is related to restoring to thin provisioned volumes on the storage. If the volume was thick, we'd get around 150MB/sec for restores.

During further testing we found that if we set the ESXi advanced setting "VMFS3.EnableBlockDelete" to 1 (Enabled) on the ESXi host and then ran an unmap command from the host, "esxcli storage vmfs unmap -l (Datastore name) --reclaim-unit=12800", restores would then achieve the faster speeds of around 150MB/sec.

The issue is that when you delete a VM, Storage vMotion it, remove a snapshot, etc. (any operation that creates and removes files on a datastore), the notification that the blocks are now free is not sent to the storage array (VMware disabled these automatic unmap operations in 5.0 Update 1). Without the unmap, when you restore a VM, ESXi believes the blocks on the storage are free, but there is a write penalty: you have to wait for the 3PAR to zero the blocks on the fly and then write the restored data. Running the unmap removes this performance hit.
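
As a sanity check before relying on unmap, it's worth confirming the LUN backing the datastore actually advertises the Delete (unmap) primitive. A minimal sketch (the naa ID below is a placeholder for your 3PAR volume):

# Find the device backing the datastore
esxcli storage vmfs extent list

# Check VAAI primitive support on that device; look for "Delete Status: supported"
esxcli storage core device vaai status get -d naa.60002ac000000000000000000000abcd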

One thing I find really interesting is the difference in performance of SAN restore vs Network restore. I've done at least 30-40 restore tests and found the same result each time. Our ESXi Hosts and Physical Veeam Proxies are all connected via 10GbE and have 8Gb Fibre Channel access to the same 3PAR.

When restoring the same VM to the same host and datastore from the same VBK file I see the following:

- Network Restore - 140-150MB/sec
- SAN Restore - 40-50MB/sec

Before using the unmap commands I see a consistent 11-12MB/sec no matter which restore method I use.

I am really surprised the SAN restore is so much slower. I was expecting it to be faster, or at least close. SAN backups (we use the Storage Snapshot Integration) are very fast, and I can back up at over 300-400MB/sec (an HP StoreOnce NAS share is our repository, and write speeds to it are the bottleneck), but restores are crazy slow.

I did notice something else odd. All of our VMs are THICK EAGER ZEROED, which is best practice for a storage array such as 3PAR that supports ZERO DETECT. However, when a VM is restored it is changed to THICK LAZY ZEROED. I've read the VMware KBs that say any thick VM that is backed up using CBT will always restore as THICK LAZY. The only way to restore a VM as THICK EAGER is to not use CBT for the backup. This seems crazy, as CBT is fantastic for backups.

I am wondering (and am not sure how to test this) whether the unmap issue on a thin storage volume would be less of a problem if Veeam could restore a VM as THICK EAGER ZEROED. I have a theory this may improve things. Veeam creates the VM in the inventory first, takes a snapshot, and then overwrites the base VMDK files. Perhaps if the VM were created as THICK EAGER, which takes a little longer to create but not as long as the slower restores, this would get around it?
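
In the meantime, a possible stopgap (untested on my side): once a restore has finished and the VM is powered off, the lazy zeroed disk can be eagerly zeroed in place with vmkfstools. This wouldn't make the restore itself any faster; it would just return the disk to the eager zeroed format afterwards. A sketch, with a placeholder path:

# Eagerly zero an existing lazy zeroed thick disk (VM must be powered off)
vmkfstools -k /vmfs/volumes/Datastore1/RestoredVM/RestoredVM.vmdk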
chjones
Enthusiast
 
Posts: 83
Liked: 25 times
Joined: Tue Oct 30, 2012 7:53 pm
Full Name: Chris Jones

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by dellock6 » Thu Apr 30, 2015 10:15 pm

Hi Chris,
writing new blocks on a VMFS datastore has indeed a huge penalty created by metadata updates of the volume itself. You can read more in this post by Cormac Hogan from VMware: http://cormachogan.com/2013/07/18/why-i ... s-so-slow/.
I commented on the post, and hotadd seems not to be affected by this problem. A hotadd restore is missing from your tests; have you tried it?

As for SAN restores, they are a great solution if CBT is not corrupted and you can leverage it to inject changed blocks quickly into a VMDK. No idea, though, how to restore a VM directly in an eager zeroed format...
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
http://www.virtualtothecore.com
vExpert 2011-2012-2013-2014-2015-2016
Veeam VMCE #1
dellock6
Veeam Software
 
Posts: 5055
Liked: 1334 times
Joined: Sun Jul 26, 2009 3:39 pm
Location: Varese, Italy
Full Name: Luca Dell'Oca

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by chjones » Fri May 01, 2015 3:40 am

Hi Luca,

I did some tests this morning with HotAdd and found that it was indeed faster than a SAN restore. HotAdd was about 110MB/sec, which I expected to be a little slower than network, as the HotAdd proxy has to grab the VBK over the network and process it. The test VBK I've been using for these tests is sitting on the local internal drives of one of the proxies (just so I could take our HP StoreOnce out of the equation, as it is usually my bottleneck during any restore).

I'm hoping there is a way to force an eager zeroed restore somehow. What is interesting is that if I Storage vMotion a VM between two thin provisioned datastores on the 3PAR, both having been in use for nearly two years and never having had an unmap cleanup run on them, the migration occurs at around 330MB/sec or more. The migration moves a thick eager VM between datastores, and the move is offloaded to the array since it supports VAAI. If the 3PAR can move the data that fast, it seems crazy that I only see 12MB/sec unless I run an unmap.

I'll check out that link you pasted.

Thanks,

Chris
chjones
Enthusiast
 
Posts: 83
Liked: 25 times
Joined: Tue Oct 30, 2012 7:53 pm
Full Name: Chris Jones

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by SyNtAxx » Fri May 01, 2015 8:49 pm (2 people like this post)

I did some additional testing on this subject today. I can restore faster to a 2-drive mirror set in my server blade formatted as VMFS5 than I can to a 1700-drive 3PAR V800! Twice as fast. So why do I seemingly not see the penalty on the small disk set that is formatted with the same file system? I also disabled VAAI on the test host and used a virgin LUN exported to the host as a test: 60-80MB/sec to the SAN, 160-225MB/sec to the 2-drive mirror set.
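
For completeness, disabling VAAI on the test host was done with the standard advanced settings; a sketch (set each back to 1 to re-enable):

# Disable the three core VAAI primitives on the host
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0    # Full Copy (XCOPY)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0    # Block Zero (WRITE SAME)
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0     # ATS locking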

-Nick
SyNtAxx
Expert
 
Posts: 127
Liked: 14 times
Joined: Fri Jan 02, 2015 7:12 pm

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by Gostev » Fri May 01, 2015 9:56 pm

I was just going to suggest a similar test; great thinking, Nick. I think our support should open a case with HP and VMware on your behalf at this point, as all signs point to some trouble with backup proxy connectivity into the SAN fabric, or maybe even the SAN configuration itself...
Gostev
Veeam Software
 
Posts: 21396
Liked: 2350 times
Joined: Sun Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by SyNtAxx » Mon May 04, 2015 7:03 pm

Gostev wrote:I was just going to suggest a similar test; great thinking, Nick. I think our support should open a case with HP and VMware on your behalf at this point, as all signs point to some trouble with backup proxy connectivity into the SAN fabric, or maybe even the SAN configuration itself...



Gostev,

I would love some additional help! As for my configuration, my ESX servers (blades) are directly connected to the 3PAR array at 8 x 8Gbps, so it's a flat SAN with no switching involved. My physical SAN proxy is connected to our Brocade SAN directors, one hop away from the 3PAR; I did this to eliminate any potential edge-switching issues. I've eliminated everything I can from my setup so far.

-Nick
SyNtAxx
Expert
 
Posts: 127
Liked: 14 times
Joined: Fri Jan 02, 2015 7:12 pm

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by jbsengineer » Tue May 05, 2015 1:30 pm (1 person likes this post)

dellock6 wrote:Hi Chris,
writing new blocks on a VMFS datastore has indeed a huge penalty created by metadata updates of the volume itself. You can read more in this post by Cormac Hogan from VMware: http://cormachogan.com/2013/07/18/why-i ... s-so-slow/.
I commented on the post, and hotadd seems not to be affected by this problem. A hotadd restore is missing from your tests; have you tried it?

As for SAN restores, they are a great solution if CBT is not corrupted and you can leverage it to inject changed blocks quickly into a VMDK. No idea, though, how to restore a VM directly in an eager zeroed format...


For this exact reason, with VMFS we are NOT able to get any better performance than around 150MB/s on restores. In fact, any operation that does "zeroing on the fly" within a VMFS datastore will be limited to around 150MB/s, even writing for the first time onto SSD.

Has Veeam corrected the "issue" from Veeam 7.0 where, when you choose "Restore same as source disk", the disk is always restored as Thick Lazy Zeroed? If there were an option to restore as Eager Zeroed Thick, the process of first writing out all the zeros and then filling in the restore data would be almost twice as fast as writing zeros on the fly. Of course, there is a point of diminishing returns depending on how over-allocated the disk is versus the "real" data within.

veeam-backup-replication-f2/full-vm-restore-through-a-backup-proxy-is-single-threaded-t22664-15.html
jbsengineer
Influencer
 
Posts: 24
Liked: 3 times
Joined: Tue Nov 10, 2009 2:45 pm

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by chjones » Thu May 07, 2015 1:57 am

jbsengineer wrote:For this exact reason, with VMFS we are NOT able to get any better performance than around 150MB/s on restores. In fact, any operation that does "zeroing on the fly" within a VMFS datastore will be limited to around 150MB/s, even writing for the first time onto SSD.

Has Veeam corrected the "issue" from Veeam 7.0 where, when you choose "Restore same as source disk", the disk is always restored as Thick Lazy Zeroed? If there were an option to restore as Eager Zeroed Thick, the process of first writing out all the zeros and then filling in the restore data would be almost twice as fast as writing zeros on the fly. Of course, there is a point of diminishing returns depending on how over-allocated the disk is versus the "real" data within.

veeam-backup-replication-f2/full-vm-restore-through-a-backup-proxy-is-single-threaded-t22664-15.html


I agree that being able to perform a THICK EAGER ZERO restore may be a workaround for thin provisioned datastores. We can deploy a 1TB THICK EAGER disk to our 3PAR and it will complete within 5 minutes. Such an overhead to achieve 10x faster restores is perfectly acceptable to me.
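
The 5-minute figure is easy to reproduce from the ESXi shell, since the zeroing is offloaded to the array via the Block Zero primitive. A sketch with placeholder names:

# Create and time a 1TB eager zeroed thick disk on the target datastore
mkdir /vmfs/volumes/Datastore1/eztest
time vmkfstools -c 1024g -d eagerzeroedthick /vmfs/volumes/Datastore1/eztest/test.vmdk

# Delete the test disk afterwards
vmkfstools -U /vmfs/volumes/Datastore1/eztest/test.vmdk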
chjones
Enthusiast
 
Posts: 83
Liked: 25 times
Joined: Tue Oct 30, 2012 7:53 pm
Full Name: Chris Jones

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

by darryl » Tue May 12, 2015 3:37 pm

chjones wrote:Just to comment on this: I am having very similar issues. Restore speeds were consistently around 11-12MB/sec restoring to my HP 3PAR 7400 via the SAN, Network, or Hot-Add methods. After a lot of testing I found the issue is related to restoring to thin provisioned volumes on the storage. If the volume was thick, we'd get around 150MB/sec for restores.

During further testing we found that if we set the ESXi advanced setting "VMFS3.EnableBlockDelete" to 1 (Enabled) on the ESXi host and then ran an unmap command from the host, "esxcli storage vmfs unmap -l (Datastore name) --reclaim-unit=12800", restores would then achieve the faster speeds of around 150MB/sec.


Unfortunately we also have this issue with our FC-connected 3Par 7400, running 3.2.1 (MU1).

This was a new environment for us. Initial backup and restore testing was fine. We also noticed that NBD was around twice as fast at restores as SAN mode.

After a couple of weeks, during which we loaded up the environment, I re-ran restore testing and started getting 12MB/sec restore rates all over the place. These are to thin provisioned VVs. Restores to a newly provisioned VV were fine: 180+MB/sec.

Running the unmap on a datastore before running a Veeam restore results in 180+MB/sec restores, vs 12 MB/sec.

We have a workaround (running the unmap before every restore), but this really isn't a great long-term solution.
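
If anyone wants to script the interim workaround, a minimal sketch from the ESXi shell (datastore names are placeholders; the reclaim unit matches the value quoted earlier in the thread):

# Unmap free blocks on each thin provisioned datastore before a restore window
for ds in Datastore1 Datastore2 Datastore3; do
    esxcli storage vmfs unmap -l "$ds" --reclaim-unit=12800
done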
darryl
Influencer
 
Posts: 21
Liked: 3 times
Joined: Wed May 11, 2011 1:37 pm
