Discussions specific to the VMware vSphere hypervisor
pgitdept
Influencer
Posts: 17
Liked: 14 times
Joined: Feb 03, 2011 10:29 am
Full Name: PGITDept
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by pgitdept » Feb 24, 2014 10:19 am

dellock6 wrote:Guys, just to try and isolate the problems, let me recap:
PGITDept: Equallogic (problems) + Nimble
Yizhar: Equallogic (problems) + 3Par, (no difference)

@PGITDept, I can't tell from your posts whether you also ran your tests against the Nimble storage, and whether VAAI on/off affects your restores. This week I'm going to run the same tests against my HP StoreVirtual VSA cluster; I'll let you know the results. *If* the numbers stay the same regardless of VAAI status, and if the Nimble storage performs the same, it would suggest this is an EQL problem rather than a problem with the VAAI libraries.

You both have the latest firmware revision on EQL?

Luca.
Interestingly, we also found the exact same problems with Nimble: 80MB/s restore speed with VAAI Block Zero ON. 200MB/s with it OFF.

We're not on the latest EQL firmware, we're on 6.0.2. Nimble we're on 1.4.7.

yizhar
Service Provider
Posts: 179
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by yizhar » Mar 03, 2014 10:51 am

pgitdept wrote: We're not on the latest EQL firmware, we're on 6.0.2.
Upgrading to latest EQL firmware won't help (I've tested).

I suggest that you open a case with EQL support and refer to my case, which was escalated to level 2, so they understand that this is not an isolated issue.

Also, please try to open a VMware support case - my customer's VMware support has expired and it will take a long time for them to renew it.

Yizhar

cmgurley
Lurker
Posts: 2
Liked: never
Joined: Jul 02, 2014 4:12 pm
Full Name: Chris Gurley
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by cmgurley » Jul 02, 2014 4:16 pm

I'm about to create a support case, but have there been any updates or developments on this? I'm experiencing 12-13MB/s restore speeds to VMware (5.5) in contrast to 165MB/s to Hyper-V. Latest version of Veeam B&R Enterprise. Storage source/target is 3PAR V400 (HP StoreServ 7400 equivalent, I believe). I've tested with VAAI on and off and see the same results.

Thanks,
Chris

cmgurley
Lurker
Posts: 2
Liked: never
Joined: Jul 02, 2014 4:12 pm
Full Name: Chris Gurley
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by cmgurley » Jul 02, 2014 4:55 pm

As an addendum to my prior post, the official job statistics showed 39 and 47MB/s with and without VAAI (no effective difference; the same 12-13MB/s during the job). The open case # is 00594580.

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by Gostev » Jul 02, 2014 7:53 pm

Are you specifying a hot-add proxy for the restore?

Vitaliy S.
Product Manager
Posts: 22527
Liked: 1475 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by Vitaliy S. » Jul 02, 2014 8:42 pm

Also, can you please tell us what performance you get when using the datastore browser in the vSphere Client to upload a large file to the same datastore you're restoring VMs to?

chjones
Expert
Posts: 104
Liked: 27 times
Joined: Oct 30, 2012 7:53 pm
Full Name: Chris Jones
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by chjones » Apr 30, 2015 9:11 pm

Just to comment on this, I am having very similar issues. Restore speeds were consistently around 11-12MB/sec restoring to my HP 3PAR 7400 via SAN, network or hot-add methods. After a lot of testing I found the issue is related to restoring to thin provisioned volumes on the storage. If the volume was thick, we'd get around 150MB/sec for restores.

During further testing we found that if we set the ESXi Advanced Setting "VMFS3.EnableBlockDelete" to 1 (Enabled) on the ESXi Host and then ran an unmap command from the host, "esxcli storage vmfs unmap -l (Datastore name) --reclaim-unit=12800", the restore would then achieve the faster speeds of around 150MB/sec.

The issue is that when you delete a VM, Storage vMotion it, take a snapshot, and so on (any operation that creates and removes files on a datastore), the advisory telling the storage array that the blocks are now free is never sent (VMware disabled these automatic unmap operations in ESXi 5.0 Update 1). Without the unmap, when you restore a VM, ESXi believes the blocks on the storage are free, but there is a write penalty: you have to wait for the 3PAR to zero the blocks on the fly and then write the restored data. Running the unmap removes this performance hit.
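The two host-side steps described above can be sketched as follows (the datastore name is a placeholder; the reclaim unit value is the one from our testing):

```shell
# Enable the block-delete advanced setting referenced above (1 = enabled)
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1

# Manually unmap dead blocks on the datastore before running a restore;
# --reclaim-unit is the number of VMFS blocks reclaimed per iteration
esxcli storage vmfs unmap -l "DatastoreName" --reclaim-unit=12800
```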

One thing I find really interesting is the difference in performance of SAN restore vs Network restore. I've done at least 30-40 restore tests and found the same result each time. Our ESXi Hosts and Physical Veeam Proxies are all connected via 10GbE and have 8Gb Fibre Channel access to the same 3PAR.

When restoring the same VM to the same host and datastore from the same VBK file I see the following:

- Network Restore - 140-150MB/sec
- SAN Restore - 40-50MB/sec

Before using the unmap commands I see a consistent 11-12MB/sec no matter which restore method I use.

I am really surprised the SAN restore is so much slower. I was expecting it to be faster, or at least close. SAN backups (we use the storage snapshot integration) are very fast and I can back up at over 300-400MB/sec (an HP StoreOnce NAS share is our repository and write speeds to it are the bottleneck), but restores are crazy slow.

I did notice something else odd. All of our VMs are THICK EAGER ZEROED, which is best practice for a storage array such as the 3PAR that supports zero detect. However, when a VM is restored it is changed to THICK LAZY ZEROED. I've read the VMware KBs that say any thick VM that is backed up using CBT will always restore as THICK LAZY. The only way to restore a VM as THICK EAGER is to not use CBT for the backup. This seems crazy, as CBT is fantastic for backups.

I am wondering (and not sure how to test this) whether the unmap issue for a thin storage volume would be less of a problem if Veeam could restore a VM as THICK EAGER ZEROED. I have a theory this may improve things. Veeam creates the VM in the inventory first, takes a snapshot, and then overwrites the base VMDK files. Perhaps if the VM were created as THICK EAGER, which takes a little longer to create but not as long as the slower restores take, this would get around it?

dellock6
Veeam Software
Posts: 5653
Liked: 1589 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by dellock6 » Apr 30, 2015 10:15 pm

Hi Chris,
writing new blocks on a VMFS datastore indeed carries a huge penalty caused by metadata updates on the volume itself. You can read more in this post by Cormac Hogan from VMware: http://cormachogan.com/2013/07/18/why-i ... s-so-slow/.
I commented on the post, and hotadd seems not to be affected by this problem. A hotadd restore is missing from your tests; have you tried it?

About SAN restores: they are a great solution if CBT is not corrupted and you can leverage it to inject changed blocks quickly into a VMDK. No idea, though, how to restore a VM directly in an eager zeroed format...
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2019
Veeam VMCE #1

chjones
Expert
Posts: 104
Liked: 27 times
Joined: Oct 30, 2012 7:53 pm
Full Name: Chris Jones
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by chjones » May 01, 2015 3:40 am

Hi Luca,

I did some tests this morning with HotAdd and found that it was indeed faster than a SAN restore. HotAdd was about 110MB/sec, which I expected to be a little slower than network since the HotAdd proxy has to grab the VBK over the network and process it. The test VBK I've been using is sitting on the local internal drives of one of the proxies (just so I could take our HP StoreOnce out of the equation, as it is usually my bottleneck during any restore).

I'm hoping there is a way to force an eager zeroed restore somehow. What is interesting is that if I Storage vMotion a VM between two thin provisioned datastores on the 3PAR, both in use for nearly 2 years and never having had an unmap cleanup run on them, the migration occurs at around 330MB/sec or more. The migration moves a thick eager VM between datastores, and that move is offloaded to the array since it supports VAAI. If the 3PAR can move the data that fast, it seems crazy that I only see 12MB/sec unless I run an unmap.

I'll check out that link you pasted.

Thanks,

Chris

SyNtAxx
Expert
Posts: 148
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by SyNtAxx » May 01, 2015 8:49 pm 2 people like this post

I did some additional testing on this subject today. I can restore faster to a 2-drive mirror set in my server blade formatted as VMFS5 than I can to a 1700-drive 3PAR V800! Twice as fast. So why do I seemingly not see the penalty on the small disk set that is formatted with the same file system? I also disabled VAAI on the test host and used a virgin LUN exported to the host as a test: 60-80MB/sec to the SAN, 160-225MB/sec to the 2-drive mirror set.

-Nick

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by Gostev » May 01, 2015 9:56 pm

I was just going to suggest a similar test, great thinking Nick. I think our support should open a case with HP and VMware on your behalf at this point, as all signs point to some trouble with backup proxy connectivity into the SAN fabric, or maybe even the SAN configuration itself...

SyNtAxx
Expert
Posts: 148
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by SyNtAxx » May 04, 2015 7:03 pm

Gostev wrote:I was just going to suggest a similar test, great thinking Nick. I think our support should open a case with HP and VMware on your behalf at this point, as all signs point to some trouble with backup proxy connectivity into the SAN fabric, or maybe even the SAN configuration itself...

Gostev,

I would love some additional help! As for my configuration, my ESX servers (blades) are directly connected to the 3PAR array at 8 x 8Gbps, so it's a flat SAN with no switching involved. My physical SAN proxy is connected to our Brocade SAN directors, one hop away from the 3PAR. I did this to eliminate any potential edge switching issues. I've eliminated all I can from my setup so far.

-Nick

jbsengineer
Enthusiast
Posts: 25
Liked: 3 times
Joined: Nov 10, 2009 2:45 pm
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by jbsengineer » May 05, 2015 1:30 pm 1 person likes this post

dellock6 wrote:Hi Chris,
writing new blocks on a VMFS datastore indeed carries a huge penalty caused by metadata updates on the volume itself. You can read more in this post by Cormac Hogan from VMware: http://cormachogan.com/2013/07/18/why-i ... s-so-slow/.
I commented on the post, and hotadd seems not to be affected by this problem. A hotadd restore is missing from your tests; have you tried it?

About SAN restores: they are a great solution if CBT is not corrupted and you can leverage it to inject changed blocks quickly into a VMDK. No idea, though, how to restore a VM directly in an eager zeroed format...
For this exact reason, with VMFS we are NOT able to get any better performance than around 150MB/s on restores. In fact, any operation that does "zeroing on the fly" within a VMFS datastore will be limited to around 150MB/s, even when writing for the first time onto SSD.

Has Veeam corrected the "issue" from Veeam 7.0 where, when you choose "Restore same as source disk", the disk is always restored as Thick Lazy Zeroed? If there were an option to restore as Eager Zeroed Thick, the process of first writing out all the zeros and then filling in the restore data would be almost twice as fast as writing zeros on the fly. Of course there is a point of diminishing returns depending on how over-allocated the disk is versus the "real" data within it.

http://forums.veeam.com/veeam-backup-re ... 64-15.html

chjones
Expert
Posts: 104
Liked: 27 times
Joined: Oct 30, 2012 7:53 pm
Full Name: Chris Jones
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by chjones » May 07, 2015 1:57 am

jbsengineer wrote: For this exact reason, with VMFS we are NOT able to get any better performance than around 150MB/s on restores. In fact, any operation that does "zeroing on the fly" within a VMFS datastore will be limited to around 150MB/s, even when writing for the first time onto SSD.

Has Veeam corrected the "issue" from Veeam 7.0 where, when you choose "Restore same as source disk", the disk is always restored as Thick Lazy Zeroed? If there were an option to restore as Eager Zeroed Thick, the process of first writing out all the zeros and then filling in the restore data would be almost twice as fast as writing zeros on the fly. Of course there is a point of diminishing returns depending on how over-allocated the disk is versus the "real" data within it.

http://forums.veeam.com/veeam-backup-re ... 64-15.html
I agree that being able to perform a THICK EAGER ZERO restore may be a potential workaround for thin provisioned datastores. We can deploy a 1TB THICK EAGER disk to our 3PAR and it will complete within 5 minutes. Such an overhead to achieve 10x faster restores is perfectly acceptable to me.

darryl
Influencer
Posts: 21
Liked: 3 times
Joined: May 11, 2011 1:37 pm
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by darryl » May 12, 2015 3:37 pm

chjones wrote:Just to comment on this, I am having very similar issues. Restore speeds were consistently around 11-12MB/sec restoring to my HP 3PAR 7400 via SAN, network or hot-add methods. After a lot of testing I found the issue is related to restoring to thin provisioned volumes on the storage. If the volume was thick, we'd get around 150MB/sec for restores.

During further testing we found that if we set the ESXi Advanced Setting "VMFS3.EnableBlockDelete" to 1 (Enabled) on the ESXi Host and then ran an unmap command from the host, "esxcli storage vmfs unmap -l (Datastore name) --reclaim-unit=12800", the restore would then achieve the faster speeds of around 150MB/sec.
Unfortunately we also have this issue with our FC-connected 3Par 7400, running 3.2.1 (MU1).

This was a new environment for us. Initial backup and restore testing was fine. We also noticed that NBD was around twice as fast at restores as SAN mode.

After a couple of weeks during which we loaded up the environment, I re-ran restore testing and started getting 12 MB/sec restore rates all over the place. These are to thin provisioned VVs. Restores to a newly provisioned VV were fine, 180+MB/sec.

Running the unmap on a datastore before running a Veeam restore results in 180+MB/sec restores, vs 12 MB/sec.

We have a workaround, doing the unmap before every restore, but this really isn't a great long term solution.

chjones
Expert
Posts: 104
Liked: 27 times
Joined: Oct 30, 2012 7:53 pm
Full Name: Chris Jones
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by chjones » May 12, 2015 10:39 pm

These are the exact same results we have seen, Darryl. Running an unmap on a 16TB VV with maybe 5-6TB free before doing a restore takes too long.

What we are doing for now as a workaround is to create a new VV big enough for the restore, restore the VM to it via NBD (since SAN restores are a third the speed for us), and then use VMware Storage vMotion to move the VM back to the original volume.

I tested a restore of a VM to its original 3PAR thin VV and it ran at 12MB/sec. I then restored the same VM to a new 3PAR VV and it ran at over 150MB/sec. I then Storage vMotioned the VM back to the same original thin VV and, based on the start and end times of the tasks in vCenter, calculated that the 3PAR itself (since it is VAAI capable) moved the VM at over 330MB/sec. This was with no unmaps at all.

It's crazy that the 3PAR can move the data that fast but if Veeam is restoring we get 12MB/sec.

isaako
Service Provider
Posts: 26
Liked: never
Joined: Sep 15, 2010 11:31 am
Full Name: Isaac González
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by isaako » Jun 15, 2015 8:27 am

Hi,

My restore speed is about 12-20 MB/sec.
All my Veeam components are connected to a 10Gb network and the backup source is idle.
I've also tested disabling and enabling /DataMover/HardwareAcceleratedInit.
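For anyone repeating this test, the VAAI Block Zero (WRITE_SAME) primitive mentioned above can be checked and toggled per host like this (a sketch; 0 disables it, 1 re-enables it):

```shell
# Show the current value of the Block Zero primitive
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit

# Disable it for a test restore, then re-enable it afterwards
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 1
```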

Veeam, do you have a solution for this issue?

Isaac

foggy
Veeam Software
Posts: 17915
Liked: 1506 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by foggy » Jun 15, 2015 4:20 pm

Isaac, please contact technical support directly. We need to collect more information regarding this performance issue. Thanks!

agrob
Expert
Posts: 181
Liked: 19 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by agrob » Jul 01, 2015 2:18 pm

Is there any general news on this issue? I'm experiencing the same problem:
Restore to an existing virtual volume on a 3PAR 7200 in hotadd mode runs at about 12MB/s.
Restore to a new virtual volume on the 3PAR runs at about 75MB/s. If I repeat restores to this volume, it slows down.
Restore to a RAID 10 array with 4 x 10k SAS disks local to the ESXi host runs at about 85MB/s.

Thanks

foggy
Veeam Software
Posts: 17915
Liked: 1506 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by foggy » Jul 01, 2015 2:30 pm

No news as of yet, please contact support for a closer look at your environment.

agrob
Expert
Posts: 181
Liked: 19 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by agrob » Jul 02, 2015 7:20 am

Thanks foggy. I'll do some more tests and then open a case.
By the way, when I restore I leave the disk type set to "same as source".
If I check the disk type after the restore it is "Thick Provision Lazy Zeroed", but the original server has "Thick Provision Eager Zeroed"...

foggy
Veeam Software
Posts: 17915
Liked: 1506 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by foggy » Jul 02, 2015 10:12 am

Yes, currently disks are always restored as Lazy Zeroed.
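As a possible post-restore workaround (a sketch, not something the product does for you; it assumes shell access to the ESXi host, a powered-off VM, and a placeholder path), a restored lazy-zeroed disk can be inflated to eager zeroed with vmkfstools:

```shell
# Zero out the un-zeroed blocks of a thick lazy-zeroed VMDK in place,
# converting it to eager zeroed (the VM must be powered off first)
vmkfstools -k /vmfs/volumes/DatastoreName/MyVM/MyVM.vmdk
```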

agrob
Expert
Posts: 181
Liked: 19 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by agrob » Jul 03, 2015 2:29 pm

I have done a lot of testing so far and have also changed the backup infrastructure a bit to what I think should give the best restore speed for our environment. For testing I used the following setup:

Proxy/repository is the same VM -> hotadd mode
Source: the repository is a VMDK file on a SAN-attached EVA 4400 LUN (a disk group with 36 x 15k disks; there is nearly no other load on this disk group)
Destination: an empty datastore on a SAN-attached 3PAR VV (a 7200 with 56 x 10k SAS disks)
The test VM to restore is a 2012 R2 VM with a 60GB C: disk and a 10GB E: disk

If I start the restore now, the proxy/repository VM reads the data over FC from the EVA system and writes it over FC (the VMware I/O stack) to the datastore on the 3PAR. Performance is about 60-70MB/s! In my opinion this should be much faster, because if I do a Storage vMotion between the EVA and the 3PAR I get about 200-300MB/s or more. Also, there is no 1Gbit NIC involved that could be a bottleneck (only the Veeam B&R server is at another location with a 1Gbit connection, but that should not matter because the restore traffic stays inside the proxy/repository VM).
By the way, if I restore the same VM to another datastore where other VMs are running, performance is worse (about 10-20MB/s).

There are other threads in this forum reporting restore performance problems. Do we have a general problem here?
Thanks

Christian33
Influencer
Posts: 10
Liked: never
Joined: Dec 29, 2014 8:01 am
Contact:

[MERGED] Slow restore rate

Post by Christian33 » Jul 10, 2015 7:09 am

Hello everyone,

we have a slow restore rate (about 16 MB/s). We use a physical server as the backup server. The server holds all Veeam roles (proxy, ...) and the backups are stored on its local hard drives.
This server is connected at 10Gbit/s to the VMware ESXi 6 host server.

Do you have an idea of what we need to change in the configuration?

Thanks for your help.

agrob
Expert
Posts: 181
Liked: 19 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow restorerate

Post by agrob » Jul 10, 2015 7:37 am

Hi Christian

There are several threads here at the moment regarding restore speed. I also see restore speeds of about 12MB/s if I restore to an existing datastore where other VMs are running. I have run a lot of tests (NBD restore, hotadd, different hosts, etc.). I get better restore rates (about 60-70MB/s; better, but still not good) if I restore to a new, empty datastore without any other VMs running on it. Do you have the opportunity to create a new datastore and run a restore to it?

Regards

Christian33
Influencer
Posts: 10
Liked: never
Joined: Dec 29, 2014 8:01 am
Contact:

Re: Slow restorerate

Post by Christian33 » Jul 10, 2015 8:41 am

Hi agrob,

I have tested a restore to a datastore without running VMs. The datastore (local hard drives) is on an ESXi host server. The result is the same as with the restore to the SAN.

agrob
Expert
Posts: 181
Liked: 19 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow restorerate

Post by agrob » Jul 10, 2015 10:29 am

Do you have the possibility to create a new datastore from SAN storage (formatted with the latest VMFS) and restore to it?

Shestakov
Veeam Software
Posts: 6658
Liked: 665 times
Joined: May 21, 2014 11:03 am
Full Name: Nikita Shestakov
Location: Prague
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by Shestakov » Jul 10, 2015 10:55 am

Hello Christian,
I would try a restore in hot-add mode; you need to grant the proxy role to one of your VMs.
Please review this thread, as it contains a lot of relevant information. Thanks!

chjones
Expert
Posts: 104
Liked: 27 times
Joined: Oct 30, 2012 7:53 pm
Full Name: Chris Jones
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by chjones » Jul 13, 2015 2:16 am 3 people like this post

My support case I had for this has concluded (Case 00911701). Veeam opened a case with VMware and a Veeam KB has been created as a result: http://www.veeam.com/kb2052.

The KB explains why SAN restores are so slow. Basically, Veeam must check with the ESXi host before every block is written in order to work out which block on the disk to write to next, and this causes the slowdown. If the restored VM were THICK EAGER, this would not be an issue. When you use network mode for a restore, the ESXi host performs the write to the datastore, and since it already has knowledge of the layout of the VMDK it does not have to keep checking which block to use next.

For now the best solution is to make sure your datastore LUNs are OFFLINE on your backup proxy and are not mounted. This still allows SAN mode backups to work, but a restore will fail over to network mode. You get a warning during the restore that SAN mode is unavailable, but it then fails over to network and the speeds are good.
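On a Windows backup proxy, the offline/not-mounted state described above can be applied with a diskpart script (a sketch; the disk number is an example, so confirm which disks are the datastore LUNs with "list disk" before taking anything offline):

```shell
rem Save as offline-san-luns.txt and run: diskpart /s offline-san-luns.txt
rem Keep newly discovered shared SAN LUNs offline by default
san policy=offlineshared
rem Take an existing datastore LUN offline (disk 2 is an example number)
select disk 2
offline disk
```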

It would be good if the restore wizard prompted for which restore mode to use, but for now this is the best we have. At least Veeam and VMware have confirmed the cause rather than finding something we were all doing wrong.

Christian33
Influencer
Posts: 10
Liked: never
Joined: Dec 29, 2014 8:01 am
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by Christian33 » Jul 15, 2015 9:45 am

Hi,

We are not restoring to the SAN.
We are using a physical backup server with local hard drives; the backup server holds all Veeam roles (proxy, ...).
We restore a backup to a new datastore on the local hard drives of an ESXi host. This ESXi host and the datastore have no running VMs, and the servers are connected at 10Gbit/s.

The restore speed is about 10 MB/s.

Do we need to create a VM as a backup proxy on the ESXi host?
