Incredibly slow Direct SAN restore

VMware specific discussions

Re: Incredibly slow Direct SAN restore

by chjones » Mon May 04, 2015 3:08 am

Hi Nick,

I saw your posts in another topic I had been active in, vmware-vsphere-f24/slow-restore-speed-27mb-s-tips-ideas-t12892-30.html, and you mentioned you had the same speed issues with SAN restores. You mention here that this is a bug and there is a fix. Is that confirmed with Veeam Support? I upgraded to 8.0 Patch 2 this morning, tested this issue, and still see SAN restore speeds at about half those of Network and HotAdd restores.

If there is a confirmed fix for this I'll contact support, try to get it, and confirm it works for me as well.

Thanks,

Chris
chjones
Enthusiast
 
Posts: 83
Liked: 25 times
Joined: Tue Oct 30, 2012 7:53 pm
Full Name: Chris Jones

Re: Incredibly slow Direct SAN restore

by SyNtAxx » Mon May 04, 2015 7:01 pm

Chris,

There was a bug that I stumbled across when restoring large VMs (900GB+) via the SAN method: the transport would hang/timeout. There was a private patch released for that. I seriously doubt my configuration at this point is to blame for slow SAN restores. My blade cage is directly connected to the 3PAR V800 via 8 x 8Gbps ports. My physical SAN proxy is directly connected to our SAN directors to eliminate any edge switch issues. Results are consistent no matter what. I have a new colo site coming online with a new 3PAR array, and when I have a moment I'll test there. If it is still slow then there seems to be an application issue, or at the very least an interaction issue at some level. I think if we keep up the heat we might get some additional exposure.

-Nick
SyNtAxx
Expert
 
Posts: 127
Liked: 14 times
Joined: Fri Jan 02, 2015 7:12 pm

Re: Incredibly slow Direct SAN restore

by chjones » Tue May 05, 2015 4:08 am

Nick,

I agree that I don't believe it's your infrastructure. I can write data at up to 800-900MB/sec to a 3PAR volume presented to my Veeam proxy, which is a Windows 2012 R2 server on the same model HP blade server, and in the same blade enclosure, as my ESXi hosts, but a Veeam SAN restore is not even a tenth of that speed.
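
For what it's worth, the raw-write test was nothing exotic; a rough Python sketch along these lines gives the same kind of number (the target path is a placeholder for wherever the 3PAR volume is mounted on the proxy, and the OS page cache can inflate results, hence the fsync):

Code: Select all
# Rough sequential-write benchmark, Python 3. Target path and sizes are
# placeholders; adjust for your own proxy and volume.
import os
import time

TARGET = r"E:\san-volume\throughput-test.bin"   # hypothetical 3PAR volume mount
BLOCK = 512 * 1024                              # 512 KB per write
TOTAL = 10 * 1024 ** 3                          # 10 GB total

buf = os.urandom(BLOCK)                         # incompressible data
start = time.monotonic()
with open(TARGET, "wb", buffering=0) as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    os.fsync(f.fileno())                        # force everything to disk
elapsed = time.monotonic() - start
print(f"{TOTAL / elapsed / 1024 ** 2:.0f} MB/s over {elapsed:.1f}s")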

I have a very similar setup to you: 2 x HP c7000 Blade Chassis, each with 2 x FlexFabric 10Gb/24-Port Modules (we used to use these for Direct-Attach to the 3PAR for a flat SAN, but now just use them for 10GbE Ethernet to our Cisco 6880 core switch) and 2 x HP 8Gb 20-Port Fibre Channel Modules that connect to 2 x HP SN3000B 16Gb Fibre Channel Switches (rebadged Brocade switches). The 3PAR 7400 (4-node) has an 8Gb FC connection from each controller to each of the two fibre switches. All up, each ProLiant BL460c Gen8 blade sees 8 paths to the 3PAR.

SAN backups work great, very fast (an Active Backup can run at 400-500MB/sec+ if there is no other activity). Network restores are quick at 140-150MB/sec+ ... however SAN restores are down to 40-50MB/sec. I tested again this morning with a new thin provisioned volume and also a thick provisioned volume, using a VBK that was on the local 300GB SAS drives of one of the Gen8 blades that acts as a Veeam proxy (to take the Veeam repository out of the equation for speed tests), and got the same results no matter whether the 3PAR volume was thick or thin.

Definitely something going on.

I'll open a case with Veeam ... I'm dreading explaining this issue and the many, many tests that have been done. But fingers crossed.
chjones
Enthusiast
 
Posts: 83
Liked: 25 times
Joined: Tue Oct 30, 2012 7:53 pm
Full Name: Chris Jones

Re: Incredibly slow Direct SAN restore

by SyNtAxx » Tue May 05, 2015 12:57 pm

chjones wrote: I agree that I don't believe it's your infrastructure. [...] Definitely something going on. I'll open a case with Veeam.

Sounds good. Let's keep on it.

-Nick
SyNtAxx
Expert
 
Posts: 127
Liked: 14 times
Joined: Fri Jan 02, 2015 7:12 pm

Re: Incredibly slow Direct SAN restore

by chjones » Fri May 15, 2015 4:15 am

I have support case 00911701 opened about this. I'm kinda struggling to get them to understand the problem.

The only response so far has been "The Direct SAN Access transport mode can be used to restore VMs with thick disks only. Before VM data is restored, the ESX(i) host needs to allocate space for the restored VM disk on the datastore".

This is fine and makes sense. My response to this is that all of the VMs we back up are THICK EAGER ZEROED, yet when they are restored, even if we select KEEP SAME AS SOURCE for the disks, the VMs are always restored as THICK LAZY ZEROED, so I'm not sure why this would cause the restore speed to be one third that of a network restore. A network restore has to write to the same datastore, so you would expect the speed to be roughly the same if that was the issue. Plus, a network restore has to write via the hypervisor, whereas a SAN restore writes directly to the SAN volume, bypassing the hypervisor.

I understand there is an overhead on THICK EAGER disks, as the storage is zeroed for every block up front so this doesn't have to occur on first write, but I can create a 1TB THICK EAGER VMDK on the same datastore from within vCenter and it will complete in a couple of minutes.
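
If anyone wants to repeat that eager-zeroed creation test programmatically rather than through the vCenter UI, here's a hedged pyVmomi sketch (the host, credentials, datastore path and the "first datacenter" lookup are all placeholders). The 3PAR supports VAAI zeroing offload ("Write Same"), which is presumably why the eager zeroing completes in minutes:

Code: Select all
# Hedged sketch using pyVmomi (the vSphere Python SDK): create a 1TB
# eager-zeroed-thick VMDK and time how long vCenter takes to zero it.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only; validate certs in prod
si = SmartConnect(host="vcenter.example.local",  # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    dc = si.content.rootFolder.childEntity[0]    # assumes the first datacenter
    spec = vim.VirtualDiskManager.FileBackedVirtualDiskSpec(
        diskType="eagerZeroedThick",             # zero every block up front
        adapterType="lsiLogic",
        capacityKb=1024 ** 3)                    # 1TB expressed in KB
    start = time.monotonic()
    task = si.content.virtualDiskManager.CreateVirtualDisk_Task(
        name="[Datastore01] speed-test/eager-test.vmdk",  # placeholder path
        datacenter=dc, spec=spec)
    while task.info.state not in (vim.TaskInfo.State.success,
                                  vim.TaskInfo.State.error):
        time.sleep(2)                            # poll the vCenter task
    print(f"{task.info.state} in {time.monotonic() - start:.0f}s")
finally:
    Disconnect(si)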

I can't understand why restoring via the SAN (40-50MB/sec) versus the network (140-150MB/sec) would show such a large difference.
chjones
Enthusiast
 
Posts: 83
Liked: 25 times
Joined: Tue Oct 30, 2012 7:53 pm
Full Name: Chris Jones

Re: Incredibly slow Direct SAN restore

by chjones » Mon May 25, 2015 7:59 pm (1 person likes this post)

Well, finally making some progress with this case. At least Veeam Support have now been able to reproduce the same issue.

Support had me download the VDDK (the library VMware provides for accessing virtual disk storage) and run a few tests. If I write to a VMDK using network mode the speed is over 250MB/sec. If I use SAN mode the speed plummets to 60MB/sec. The VMDK was THICK LAZY ZEROED. However, if the VMDK I am writing to is THICK EAGER ZEROED, then the speed of the VDDK tests rivals the 250MB/sec+ that I see with a network mode restore.
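
I can't post the VDDK test itself here, but as a crude local analogy (not the VDDK, and filesystem behaviour varies a lot, so treat any numbers as illustrative only) you can time the same write workload into a sparse, truncate-only file ("lazy") versus one whose blocks were zero-filled beforehand ("eager"):

Code: Select all
# Crude lazy-vs-eager analogy in stock Python: time identical writes into a
# sparse file versus a preallocated, pre-zeroed one.
import os
import time

SIZE = 2 * 1024 ** 3        # 2 GB per test file
BLOCK = 1024 * 1024         # 1 MB per write
buf = os.urandom(BLOCK)     # incompressible data

def fill(path):
    """Overwrite the whole file and return throughput in MB/s."""
    start = time.monotonic()
    with open(path, "r+b", buffering=0) as f:
        for _ in range(SIZE // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())
    return SIZE / (time.monotonic() - start) / 1024 ** 2

# "lazy": allocate by truncation only; blocks materialise on first write
with open("lazy.bin", "wb") as f:
    f.truncate(SIZE)

# "eager": pre-write zeros so every block is already allocated
with open("eager.bin", "wb", buffering=0) as f:
    for _ in range(SIZE // BLOCK):
        f.write(bytes(BLOCK))
    os.fsync(f.fileno())

print(f"lazy : {fill('lazy.bin'):.0f} MB/s")
print(f"eager: {fill('eager.bin'):.0f} MB/s")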

It's the same conclusion we had come to ourselves: a thick eager VM is always restored as thick lazy zeroed by Veeam, and this is causing the slow SAN restore speed issues. I put the question to support and they confirmed that Veeam itself sets the disk to lazy regardless of what was backed up.
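
For anyone wanting to verify what their own restores come back as, a hedged pyVmomi sketch like the following lists each disk's provisioning from the thinProvisioned/eagerlyScrub flags on its backing (connection details and the VM name are placeholders):

Code: Select all
# Hedged pyVmomi sketch: report thin / thick lazy / thick eager for each
# virtual disk of a named VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only
si = SmartConnect(host="vcenter.example.local",  # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "restored-vm")  # placeholder
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            b = dev.backing                      # flat VMDK backing info
            if getattr(b, "thinProvisioned", False):
                kind = "thin"
            elif getattr(b, "eagerlyScrub", False):
                kind = "thick eager zeroed"
            else:
                kind = "thick lazy zeroed"
            print(f"{dev.deviceInfo.label}: {kind}")
finally:
    Disconnect(si)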

At least the case is making progress as I've been advised that Veeam are looking into the eager/lazy issue and have been able to reproduce the same results. Hopefully there is a resolution forthcoming.
chjones
Enthusiast
 
Posts: 83
Liked: 25 times
Joined: Tue Oct 30, 2012 7:53 pm
Full Name: Chris Jones

Re: Incredibly slow Direct SAN restore

by SyNtAxx » Tue May 26, 2015 1:35 pm

Good to hear. I wasn't able to make any progress with them, as the tool wasn't working when they attempted the same tests.

-Nick
SyNtAxx
Expert
 
Posts: 127
Liked: 14 times
Joined: Fri Jan 02, 2015 7:12 pm

Re: Incredibly slow Direct SAN restore

by chjones » Tue Jun 02, 2015 3:24 am

Veeam have now opened a case with VMware, as they see the same results internally, and I've given them permission to hand over my details to VMware if they wish to contact me regarding the issue. Fingers crossed.
chjones
Enthusiast
 
Posts: 83
Liked: 25 times
Joined: Tue Oct 30, 2012 7:53 pm
Full Name: Chris Jones

Re: Incredibly slow Direct SAN restore

by dmitri-va » Mon Jun 22, 2015 5:46 pm

chjones wrote: Veeam have now opened a case with VMware, as they see the same results internally, and I've given them permission to hand over my details to VMware if they wish to contact me regarding the issue. Fingers crossed.


Just came across the same issue with my Direct SAN restore testing. Is there any update on your case?
dmitri-va
Enthusiast
 
Posts: 49
Liked: 3 times
Joined: Mon Jun 01, 2015 1:28 pm
Full Name: Dmitri

Re: Incredibly slow Direct SAN restore

by Vitaliy S. » Tue Jun 23, 2015 10:10 am

Hi Dmitri,

Can you please give us a bit more details on your setup? What disks were you restoring? What was the connection type, and what performance rates did you see?

Thanks!
Vitaliy S.
Veeam Software
 
Posts: 19558
Liked: 1102 times
Joined: Mon Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Incredibly slow Direct SAN restore

by dmitri-va » Tue Jun 23, 2015 2:50 pm

Vitaliy S. wrote: Can you please give us a bit more details on your setup? What disks were you restoring? What was the connection type, and what performance rates did you see?

I have Veeam B&R v8.0.0.2021 on a physical server (Dell R720) with a dedicated 10GbE connection to a Compellent SAN over iSCSI.

The average throughput for Direct SAN backups is ~300MB/s; however, the Direct SAN restore I tested of a 'thick eager zeroed' VM was only 75MB/s.

I can't compare it with restoring the same VM using network mode yet, but once I get a 10GbE connection for it, I will.
dmitri-va
Enthusiast
 
Posts: 49
Liked: 3 times
Joined: Mon Jun 01, 2015 1:28 pm
Full Name: Dmitri

Re: Incredibly slow Direct SAN restore

by Vitaliy S. » Tue Jun 23, 2015 4:15 pm

What about using hotadd mode for restoring the entire VM image? If you have a virtual proxy server, then you can run a restore job through it and then compare the restore job performance.
Vitaliy S.
Veeam Software
 
Posts: 19558
Liked: 1102 times
Joined: Mon Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Incredibly slow Direct SAN restore

by dmitri-va » Tue Jun 23, 2015 5:10 pm

Vitaliy S. wrote: What about using hotadd mode for restoring the entire VM image? If you have a virtual proxy server, then you can run a restore job through it and then compare the restore job performance.


I'd like to stay with the physical Veeam deployment to keep the backup infrastructure decoupled from the VMware cluster.

So, do I understand this correctly: the slow Direct SAN restore issue is due to VMs being restored as thick lazy zeroed, no matter what the original disk was, and this is a VMware limitation? Or is it something else?
dmitri-va
Enthusiast
 
Posts: 49
Liked: 3 times
Joined: Mon Jun 01, 2015 1:28 pm
Full Name: Dmitri

Re: Incredibly slow Direct SAN restore

by foggy » Wed Jun 24, 2015 5:04 pm

dmitri-va wrote: So, do I understand this correctly: the slow Direct SAN restore issue is due to VMs being restored as thick lazy zeroed, no matter what the original disk was, and this is a VMware limitation?

That is correct, except it is not a VMware limitation.
foggy
Veeam Software
 
Posts: 14742
Liked: 1079 times
Joined: Mon Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Incredibly slow Direct SAN restore

by dmitri-va » Wed Jun 24, 2015 5:59 pm

Oh, OK. Somewhere earlier in the thread it was mentioned that once Veeam opened its own case they also opened a VMware case, so I figured that was because the issue had been traced to some VMware bug or limitation...

Any plans on a roadmap to fix it, or is the solution just to switch to hotadd or network mode for restores?
dmitri-va
Enthusiast
 
Posts: 49
Liked: 3 times
Joined: Mon Jun 01, 2015 1:28 pm
Full Name: Dmitri
