Direct Storage Access FC - slow

VMware specific discussions

Re: Direct Storage Access FC - slow

by emachabert » Thu Mar 17, 2016 9:11 am

I see you have NL disks. Do you use AO?
Does the VM you are trying to back up have disks that are spread over different disk types through an AO policy?
Veeamizing your IT since 2009/ Vanguard 2015,2016,2017
emachabert
Veeam Vanguard
 
Posts: 355
Liked: 163 times
Joined: Wed Nov 17, 2010 11:42 am
Location: France
Full Name: Eric Machabert

Re: Direct Storage Access FC - slow

by Pat490 » Thu Mar 17, 2016 9:59 am

emachabert wrote:Just follow best practices:
- Windows MPIO configured for 3ParVV
- Fillword set to 3 on Brocade 8Gb fabrics
- All VM disks are thick eager zeroed
- VVs are thin provisioned

Sorry if it is a stupid question, but what is a "fillword"? We also use a Brocade FC switch, not with 3Par but with NetApp, so maybe this setting is interesting for me too?
Pat490
Expert
 
Posts: 136
Liked: 24 times
Joined: Tue Apr 28, 2015 7:18 am
Location: Germany
Full Name: Patrick

Re: Direct Storage Access FC - slow

by emachabert » Thu Mar 17, 2016 10:34 am

Have a look at that blog post: http://www.erwinvanlonden.net/2012/03/f ... ey-needed/
He explains it very well.

When using Brocade switches and 8Gb/s HBAs, you should set the fillword to 3 (99.99% of the time); just check the prerequisites from your storage vendor.
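
For reference, on a Brocade FOS switch the check and the change are usually done per port, roughly like below (just a sketch; port 5 is an example, and the exact syntax and supported modes vary by FOS version, so check the command help on your switch first):

porterrshow (watch the bad_os counters climbing on the 8Gb ports)
portcfgshow 5 (shows the current settings, including the fill word, for port 5)
portcfgfillword 5,3 (mode 3 = try ARBF/ARBF, fall back to IDLE/ARBF; this toggles the port, so do one port at a time)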
Veeamizing your IT since 2009/ Vanguard 2015,2016,2017
emachabert
Veeam Vanguard
 
Posts: 355
Liked: 163 times
Joined: Wed Nov 17, 2010 11:42 am
Location: France
Full Name: Eric Machabert

Re: Direct Storage Access FC - slow

by m1kkel » Thu Mar 17, 2016 1:43 pm

emachabert wrote:This is a known issue with Brocade fabrics @ 8Gb/s.
Fillword should be set to 3 (if ARBF/ARBF fails, use IDLE/ARBF); if not, you get bad_os errors increasing continuously.

Beware, configuring the fillword will disable/enable the port, so do one port at a time with a 5 min pause between each.

Regarding the eager zeroed thick disks, you should definitely look at the literature about Thin on Thin, Thin on Thick, Thick on Thick and Thick on Thin :D
When dealing with a 3Par, with hardware-assisted thin provisioning and global wide striping, you should really consider using Thick on Thin (Eager Zeroed).

One Veeam benefit of using thick VM disks is DirectSAN restore and CBT restore !! Think about it !

:D


Thanks for that info, I really appreciate it.
So I fixed everything last night: changed the fillword and converted my test server to thick eager zeroed.
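(For anyone wanting to do the same conversion: it can be done with a Storage vMotion set to "Thick Provision Eager Zeroed", or offline with vmkfstools, roughly like below; the path is just a placeholder and the flags are worth double-checking against your ESXi version:

vmkfstools -j /vmfs/volumes/<datastore>/<vm>/<vm>.vmdk (inflates a thin disk to eager zeroed thick)
vmkfstools -k /vmfs/volumes/<datastore>/<vm>/<vm>.vmdk (eager-zeroes an existing lazy zeroed thick disk)

Both are typically run with the VM powered off.)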

However, backup speeds are the same (full backup).
There are no errors on my switch anymore.

If I put 4 servers in one full backup job, I'm pushing 400 MB/s, but I think I should push that amount with just one job; all the data is spread across all disks, that's the beauty of 3Par.
Can you try to do an active full on one of the systems you are managing, with just one server with one disk, for comparison?
m1kkel
Enthusiast
 
Posts: 47
Liked: 1 time
Joined: Thu Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen

Re: Direct Storage Access FC - slow

by m1kkel » Thu Mar 17, 2016 1:52 pm 1 person likes this post

BrandonH wrote:Interesting, I'm seeing the same numbers as you (130-180 MB/s). I'm using Brocade Condor3/16G (no fillword setting), so that's not an option for me.

I have two 7400s: 24 SSD, 148 FC, 60 NL disks.

I have two proxies, HP DL380 G9s (dual 12-core, 32 GB RAM) with Brocade/QLogic 1862s with 16G FC/10G Ethernet.

I use thin VVs with thick eager zeroed VM disks.

My 3PARs are separated by roughly 66k of DWDM. Two 10G for Ethernet, two 10G FC (roughly 500ns return). <-- This is my only complicating factor.

I back up to FC storage, with a copy job that then moves the backups to a StoreOnce appliance for longer retention.

All of my hosts are on Brocade 1862s as well, same 16G/10G setup. I use storage snapshots for my backups. My hosts are also running 5.5.

When speaking to support a year or so ago, I was told the speeds I'm getting are normal and that I wouldn't see anything faster. I don't have any fabric errors or port errors; everything seems to be running very clean. An active full nets me 188 MB/s on average. I also get 99% Source, 24% Proxy, 1% Network, 1% Target (that math seems a bit off too rofl).

I'm running flash caching and was taking a large amount of hits. I tend to stay around 25-30% during my peak traffic times.


Hi. :-)
You have an awesome system, and you are getting the same low numbers as me, relative to the systems both of us own.

I changed the fillword, and all errors are gone.
I changed my test server to thick eager zeroed.
Ran a new backup, same speed.

I really don't get it; it seems like there's a cap somewhere.

If I put 4 servers in a new job and run a full, I push 400 MB/s, but still... why the f*** can't I push more than 190 MB/s out of a single VMDK?

I was thinking of creating a test LUN on SSD just to see if that actually changes anything.
I have no QoS set up in my 3Par.

Have you tried running diskspd.exe from your proxy against an NTFS-formatted LUN you present just for testing? I know it is not the same thing, but just to see how much the 3Par can deliver. Alternatively run it in a VM, then you go through the VMkernel APIs...
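
Something like the run below is what I have in mind (a sketch; T: and the file name are just placeholders, and the flags are worth checking against the diskspd documentation):

diskspd.exe -c20G -b512K -d60 -t4 -o8 -si -w0 -Sh T:\testfile.dat

That should give 60 seconds of 512 KB read-only sequential I/O with 4 threads sharing one stream and 8 outstanding I/Os per thread, Windows caching disabled, against a 20 GB test file on the presented LUN.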

How much do you get with 4 servers in one job?

Support tells me they have never seen anything run at 1000 MB/s or anywhere near that... but WHY???
m1kkel
Enthusiast
 
Posts: 47
Liked: 1 time
Joined: Thu Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen

Re: Direct Storage Access FC - slow

by emachabert » Thu Mar 17, 2016 1:59 pm 1 person likes this post

Did a full on a Peer Persistence cluster (2*7200, 48*450GB 10K each)

One VM with one disk of 60 GB (only the OS is installed):
- Processed: 60 GB
- Read: 42.6 GB
- Transferred: 7.9 GB
- Duration: 03:45
- Bottleneck: source
- Processing rate: 592 MB/s

Then a full restore (Direct SAN):
- Peak speed: 415 MB/s
- Processing speed: 222 MB/s
- Duration: 5m21s
Veeamizing your IT since 2009/ Vanguard 2015,2016,2017
emachabert
Veeam Vanguard
 
Posts: 355
Liked: 163 times
Joined: Wed Nov 17, 2010 11:42 am
Location: France
Full Name: Eric Machabert

Re: Direct Storage Access FC - slow

by m1kkel » Thu Mar 17, 2016 2:01 pm

Really cool, could you please show me the speed per disk as well?
I am thinking that the processing rate is high because only a small portion of the actual VMDK is data, but I am not sure... :)
m1kkel
Enthusiast
 
Posts: 47
Liked: 1 time
Joined: Thu Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen

Re: Direct Storage Access FC - slow

by emachabert » Thu Mar 17, 2016 2:11 pm

For sure, most of what was read was blank (eager zeroed).
For the part that wasn't blank, it ran between 280 and 450 MB/s.

Look at my old post : veeam-backup-replication-f2/3par-at-full-speed-t19926.html#p99040

You need to have parallel streams to get the best from the system :D
Veeamizing your IT since 2009/ Vanguard 2015,2016,2017
emachabert
Veeam Vanguard
 
Posts: 355
Liked: 163 times
Joined: Wed Nov 17, 2010 11:42 am
Location: France
Full Name: Eric Machabert

Re: Direct Storage Access FC - slow

by m1kkel » Thu Mar 17, 2016 2:21 pm

Yeah, I already have parallel streams, but my proxy is limited to 4 concurrent tasks, since it is only a 4-core Xeon.
So it seems like it is possible to push more with more servers in the same job, but it's just weird that we can't deliver more than 200 MB/s per VMDK.

Also, I do not understand why I can't reach your speed on a single VM!!

Just did a DirectSAN restore of the same VM, restore rate 175 MB/s, and:
17-03-2016 15:09:13 Restoring Hard disk 1 (95,0 GB): 49,0 GB restored at 132 MB/s

That seems slow, but I've read somewhere that there's an issue with that on 3Par and that I need to create a new LUN, correct?
m1kkel
Enthusiast
 
Posts: 47
Liked: 1 time
Joined: Thu Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen

Re: Direct Storage Access FC - slow

by emachabert » Thu Mar 17, 2016 2:24 pm

I am using Intel E5v3 8 or 10 core proxies.

Regarding the new LUN creation, I am not aware of that requirement. At least, I never did that myself.
Veeamizing your IT since 2009/ Vanguard 2015,2016,2017
emachabert
Veeam Vanguard
 
Posts: 355
Liked: 163 times
Joined: Wed Nov 17, 2010 11:42 am
Location: France
Full Name: Eric Machabert

Re: Direct Storage Access FC - slow

by m1kkel » Thu Mar 17, 2016 2:38 pm

Something regarding Veeam using thick provisioned lazy instead of eager, as I remember.
Thread here: vmware-vsphere-f24/slow-restore-speed-27mb-s-tips-ideas-t12892-60.html

Question: what are the row and set sizes of your LDs?
m1kkel
Enthusiast
 
Posts: 47
Liked: 1 time
Joined: Thu Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen

Re: Direct Storage Access FC - slow

by BrandonH » Thu Mar 17, 2016 5:24 pm 1 person likes this post

We don't use AO. I sort by hand. 90% of my VMs reside on FC. The NL drives we use mostly for logging. The SSDs are for VDI testing and flash cache. I have moved VVols over to them with more or less the same results.

My proxies are E5 v3 8C/16T (I had to look to remember).
BrandonH
Influencer
 
Posts: 21
Liked: 2 times
Joined: Wed Jul 10, 2013 4:37 pm
Full Name: Brandon Hogue

Re: Direct Storage Access FC - slow

by emachabert » Thu Mar 17, 2016 6:38 pm 1 person likes this post

Set size is 5+1.
I have to look up the row size, but everything is at default values (grow increment and so on).

If you have added disks after the initial setup, be sure you have run the tunesys command to rebalance the chunklets across all disks.
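
On the 3Par CLI that is roughly the following (a sketch; check the options for your InForm OS version first, since tunesys moves data around in the background):

showpd -c (shows chunklet usage per physical disk, so you can spot unbalanced drives)
tunesys (starts the rebalance task)
showtask -active (follows the running tune task)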
Veeamizing your IT since 2009/ Vanguard 2015,2016,2017
emachabert
Veeam Vanguard
 
Posts: 355
Liked: 163 times
Joined: Wed Nov 17, 2010 11:42 am
Location: France
Full Name: Eric Machabert

Re: Direct Storage Access FC - slow

by m1kkel » Fri Mar 18, 2016 1:22 pm

BrandonH wrote:We don't use AO. I sort by hand. 90% of my VMs reside on FC. The NL drives we use mostly for logging. The SSDs are for VDI testing and flash cache. I have moved VVols over to them with more or less the same results.

My proxies are E5 v3 8C/16T (I had to look to remember).


OK, well, in my opinion we need to find the difference between your setup and mine versus emachabert's setup.

Here I am talking about the 3Par part, because I think that is where the issue is. Where else could it be?

I have 32 FC 10K 600 GB disks + 8 SSD 480 GB disks, with AO.
The CPG is configured like this:
[screenshot of the CPG configuration]

Are there any other settings relevant for this comparison?

I would especially like to see emachabert's settings :D :D :D
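
For the comparison, the CPG growth parameters (RAID type, set size, availability) can be pulled from the 3Par CLI with something like the commands below (a sketch; option names may differ slightly between InForm OS versions):

showcpg (lists the CPGs with their current space usage)
showcpg -sdg (shows the growth parameters each CPG uses when creating new LDs)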
m1kkel
Enthusiast
 
Posts: 47
Liked: 1 time
Joined: Thu Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen

Re: Direct Storage Access FC - slow

by lightsout » Fri Mar 18, 2016 1:27 pm

m1kkel wrote:Yeah, I already have parallel streams, but my proxy is limited to 4 concurrent tasks, since it is only a 4-core Xeon.
So it seems like it is possible to push more with more servers in the same job, but it's just weird that we can't deliver more than 200 MB/s per VMDK.

Also, I do not understand why I can't reach your speed on a single VM!!

Just did a DirectSAN restore of the same VM, restore rate 175 MB/s, and:
17-03-2016 15:09:13 Restoring Hard disk 1 (95,0 GB): 49,0 GB restored at 132 MB/s

That seems slow, but I've read somewhere that there's an issue with that on 3Par and that I need to create a new LUN, correct?


I've found this thread very interesting. You're describing the same issue I have here:

https://forums.veeam.com/veeam-backup-replication-f2/netapp-source-as-bottleneck-t27025.html

With multiple streams I get over 500 MB/s, but with a single VMDK I'm lucky if I go over 100 MB/s! Totally different backend SAN, but the one thing we do have in common is 8Gb FC, although I'm on Nexus 5k storage switches.
lightsout
Expert
 
Posts: 186
Liked: 47 times
Joined: Thu Apr 10, 2014 4:13 pm
