Host-based backup of VMware vSphere VMs.
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert »

I see you have NL disks. Do you use AO?
Does the VM you are trying to back up have disks spread over different disk types through an AO policy?
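If I remember the CLI right, you can check that with something like this (a rough sketch, command names from memory, so verify against your 3PAR OS release; <vv_name> is a placeholder):

showaocfg              # list the AO configurations and the CPGs/tiers they use
showvvpd <vv_name>     # show how that VV's chunklets are spread across the physical disks

If the VV backing the datastore shows chunklets sitting on the NL drives, part of the backup read is coming from 7.2K spindles, which would explain poor single-stream numbers.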
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
Pat490
Expert
Posts: 170
Liked: 29 times
Joined: Apr 28, 2015 7:18 am
Full Name: Patrick
Location: Germany
Contact:

Re: Direct Storage Access FC - slow

Post by Pat490 »

emachabert wrote:Just follow best practices:
- Windows MPIO configured for 3ParVV
- Fillword set to 3 on Brocade 8Gb fabrics
- All VM disks are eager zeroed thick
- VVs are thin provisioned
Sorry if it is a stupid question, but what is "Fillword"? We also use a Brocade FC switch, not with 3PAR but with NetApp, so maybe this setting is also interesting for me?
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert »

Have a look at that blog post: http://www.erwinvanlonden.net/2012/03/f ... ey-needed/
He explains it very well.

When using Brocade switches and 8 Gb/s HBAs, you should set the fillword to 3 (99.99% of the time); just check the prerequisites from your storage vendor.
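On the switch side it looks roughly like this (a sketch from memory; double-check the exact syntax in the FOS manual for your firmware level, port 12 is just an example):

portstatsshow 12          # the er_bad_os counter climbing is the telltale symptom
portcfgshow 12            # current port settings, including the fill word
portcfgfillword 12 3      # mode 3: use ARBF/ARBF, fall back to IDLE/ARBF if it fails

Changing the fillword bounces the port, so do it one port at a time.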
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel »

emachabert wrote:This is a known issue with Brocade fabrics at 8 Gb/s.
Fillword should be set to 3 (if ARBF/ARBF fails, use IDLE/ARBF); otherwise the bad_os error counter increases continuously.

Beware: configuring the fillword will disable/enable the port, so do one port at a time with a 5 minute pause between each.

Regarding eager zeroed thick, you should definitely look at the literature about Thin on Thin, Thin on Thick, Thick on Thick and Thick on Thin :D
When dealing with a 3PAR, which has hardware-assisted thin provisioning and global wide striping, you should really consider using Thick on Thin (eager zeroed).

One Veeam benefit of using thick VM disks is Direct SAN restore and CBT restore!! Think about it!

:D
Thanks for that info, I really appreciate it.
So I fixed everything last night: changed the fillword and converted my test server to eager zeroed thick.
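For anyone wanting to do the same conversion from the ESXi shell, I believe it goes something like this (a sketch, the path is just an example, so double-check the vmkfstools options):

vmkfstools --inflatedisk /vmfs/volumes/<datastore>/<vm>/<vm>.vmdk    # inflate a thin disk to eager zeroed thick

A new disk can also be created eager zeroed thick directly with vmkfstools -c 60G -d eagerzeroedthick.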

However, backup speeds are the same (full backup).
There are no errors on my switch anymore.

If I put 4 servers in one full backup job, I'm pushing 400 MB/s, but I think I should be pushing that amount with a job containing just a single server; all the data is spread across all disks, that's the beauty of 3PAR.
Can you try to do an active full on one of the systems you are managing, with just one server with one disk, for comparison?
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel » 1 person likes this post

BrandonH wrote:Interesting, I'm seeing the same numbers as you (130-180 MB/s). I'm using Brocade Condor3/16G (no fillword setting), so that's not one of my options.

I have two 7400s: 24 SSD, 148 FC, 60 NL.

I have two proxies, HP DL380 G9s (dual 12-core, 32 GB RAM) with Brocade/QLogic 1862s with 16G FC/10G Ethernet.

I use thin VVs with eager zeroed thick VM disks.

My 3PARs are separated by roughly 66k of DWDM. Two 10G for Ethernet, two 10G FC (roughly 500 ns return). <-- This is my only complicating factor.

I back up to FC storage, with a copy job that then moves the backups to a StoreOnce appliance for longer retention.

All of my hosts reside on 1862 Brocades as well, same 16G/10G setup. I use storage snapshots for my backups. My hosts are also running 5.5.

When speaking to support a year or so ago, I was told the speeds I'm getting are normal and that I wouldn't see any faster. I don't have any fabric errors or port errors; everything seems to be running very clean. An active full will net me 188 MB/s on average. I also get 99% Source, 24% Proxy, 1% Network, 1% Target (that math seems a bit off too rofl).

I'm running flash caching and getting a large number of hits. I tend to stay around 25-30% during my peak traffic times.
Hi. :-)
You have an awesome system, and you are getting the same low numbers as me relative to the systems both of us own.

I changed the fillword, and all errors are gone.
I changed my test server to eager zeroed thick.
Ran a new backup: same speed.

I really don't get it; it seems like there's a cap somewhere.

If I put 4 servers in a new job and run a full, I push 400 MB/s, but still... why the f*** can't I push more than 190 MB/s out of a single VMDK?

Was thinking of creating a test LUN on SSD just to see if that actually changes anything.
I have no QoS set up on my 3PAR.

Have you tried running diskspd.exe from your proxy on an NTFS-formatted LUN you present just for testing? I know it is not the same thing, but just to see how much the 3PAR can deliver. Alternatively, run it inside a VM, then you are going through the VMkernel APIs...
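Something like this is the test I have in mind (just a sketch; the drive letter and sizes are examples, and the flags should be checked against the diskspd documentation for your version):

diskspd.exe -c50G -d60 -b512K -o8 -t1 -w0 -Sh -L T:\testfile.dat

That would be a 60-second, read-only, single-threaded sequential run with 512 KB blocks and 8 outstanding I/Os against a 50 GB test file on the presented LUN (T: is just a placeholder), with host caching disabled, which is roughly the large-block sequential read pattern of a backup.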

How much do you get with 4 servers in one job?

Support tells me that they have never seen anything run at 1000 MB/s or anywhere near that... but WHY???
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert » 1 person likes this post

Did a full on a Peer Persistence cluster (2*7200, 48*450GB 10K each)

One VM with one disk of 60GB (only OS is installed):
- Processed: 60 GB
- Read: 42.6 GB
- Transferred: 7.9 GB
- Duration: 03:45
- Bottleneck: source
- Processing rate: 592 MB/s

Then a full restore (Direct SAN):
- Peak speed: 415 MB/s
- Processing speed: 222 MB/s
- Duration: 5m21s
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel »

Really cool, could you please show me the per-disk speed as well?
I am thinking that the processing rate is high because only a small portion of the actual VMDK is data, but I am not sure... :)
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert »

For sure, most of what was read was blank (eager zeroed).
For the part that wasn't blank, it ran between 280 and 450 MB/s.

Look at my old post: veeam-backup-replication-f2/3par-at-ful ... tml#p99040

You need to have parallel streams to get the best from the system :D
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel »

Yeah, I already have parallel streams, but my proxy is limited to 4 concurrent threads, since it is only a 4-core Xeon.
So it seems like it is possible to push more with more servers in the same job, but it's just weird that we can't deliver more than 200 MB/s per VMDK.

Also, I do not understand why I can't reach your speed on a single VM!!

Just did a Direct SAN restore of the same VM, restore rate 175 MB/s, and:
17-03-2016 15:09:13 Restoring Hard disk 1 (95,0 GB): 49,0 GB restored at 132 MB/s

That seems slow, but I've read somewhere that there's an issue with that on 3PAR and that I need to create a new LUN, correct?
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert »

I am using Intel E5v3 8 or 10 core proxies.

Regarding the new LUN creation, I am not aware of that requirement. At least, I never did that myself.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel »

Something regarding Veeam using thick provisioned lazy zeroed instead of eager zeroed, as I remember.
Thread here: vmware-vsphere-f24/slow-restore-speed-2 ... 92-60.html

Question: What is your row and set size for your LD?
BrandonH
Influencer
Posts: 21
Liked: 2 times
Joined: Jul 10, 2013 4:37 pm
Full Name: Brandon Hogue
Contact:

Re: Direct Storage Access FC - slow

Post by BrandonH » 1 person likes this post

We don't use AO; I sort by hand. 90% of my VMs reside on FC. NL drives we use mostly for logging. SSDs are for VDI testing and flash cache. I have moved VVols over to them with more or less the same results.

My proxies are E5 v3 8-core/16-thread (I had to look to remember).
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert » 1 person likes this post

Set size is 5+1.
I have to look for the row size, but everything is at default values (grow increment and so on).

If you have added disks after the initial setup, be sure to have run the tunesys command to rebalance the chunklets across all disks.
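From the CLI it is roughly (a sketch; column and option names from memory, so verify on your system):

showld -d          # the RowSz and SetSz columns give the row and set size per logical disk
showpd -c          # chunklet usage per physical disk; newer disks with mostly free chunklets mean the system is unbalanced
tunesys            # rebalances chunklets across all disks (it generates backend I/O, so run it in a quiet period)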
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel »

BrandonH wrote:We don't use AO; I sort by hand. 90% of my VMs reside on FC. NL drives we use mostly for logging. SSDs are for VDI testing and flash cache. I have moved VVols over to them with more or less the same results.

My proxies are E5 v3 8-core/16-thread (I had to look to remember).
OK, well in my opinion we need to find the difference between your setup and mine versus emachabert's setup.

Here I am talking about the 3PAR part, because I think this is where the issue is. Where else could it be?

I have 32 x 600 GB 10K FC disks + 8 x 480 GB SSDs, with AO.
The CPG is configured as shown in the attached screenshot.

Are there any other settings relevant for this comparison?

I would especially like to see emachabert's settings :D :D :D
lightsout
Expert
Posts: 227
Liked: 62 times
Joined: Apr 10, 2014 4:13 pm
Contact:

Re: Direct Storage Access FC - slow

Post by lightsout »

m1kkel wrote:Yeah, I already have parallel streams, but my proxy is limited to 4 concurrent threads, since it is only a 4-core Xeon.
So it seems like it is possible to push more with more servers in the same job, but it's just weird that we can't deliver more than 200 MB/s per VMDK.

Also, I do not understand why I can't reach your speed on a single VM!!

Just did a Direct SAN restore of the same VM, restore rate 175 MB/s, and:
17-03-2016 15:09:13 Restoring Hard disk 1 (95,0 GB): 49,0 GB restored at 132 MB/s

That seems slow, but I've read somewhere that there's an issue with that on 3PAR and that I need to create a new LUN, correct?
I've found this thread very interesting. You're describing the same issue I have here:

veeam-backup-replication-f2/netapp-sour ... 27025.html

With multiple streams I get over 500 MB/s, but with a single VMDK I'm lucky if I go over 100 MB/s! Totally different backend SAN, but the one thing we do have in common is 8Gb FC, although I'm on Nexus 5K storage switches.
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel »

A-ha. Interesting! Maybe this is not a storage issue after all, or at least not one related to a specific storage system. I just got off the phone with VMware, but unfortunately I do not have SDK entitlement support, which is what covers the VDDK library, so support can't help me.

As I said before, it seems like the speed is capped somewhere. When we use Direct SAN backup we are going through VMware's VDDK, so testing speed on a disk inside a VM may give a different result. https://www.vmware.com/support/develope ... notes.html

It is also interesting that emachabert can push much more than the rest of us.
What else can we test or investigate?
lightsout
Expert
Posts: 227
Liked: 62 times
Joined: Apr 10, 2014 4:13 pm
Contact:

Re: Direct Storage Access FC - slow

Post by lightsout »

I actually have multiple NetApp SANs with the same issues, and they are all 8Gb FC/Nexus 5ks. So given what you've seen and what others have seen, my immediate thought is to look at the FC switch!
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert »

Since all the arrays I installed are at different customer locations, I can't give you the information as quickly as you would like. Next week I will run a new test on a 7200 using Veeam 9 and give you the results.
Regarding NetApp, I often see the same results as you when I deploy Veeam on that type of array; I haven't been able to push more than 700 MB/s (multiple streams) on the biggest setup I have worked with.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert » 1 person likes this post

So I did a test on a 7200 with 63 x 450 GB 10K (RAID 5, 5+1).
One VM with a single 60 GB disk containing real data (so as not to read zeroes...).
Processing speed: 196 MB/s.
Average read speed: 204 MB/s.
Max read speed: 227 MB/s.

Trying with two VMs gives me twice those numbers, and so on until I cap at around 1 GB/s, which seems to be the max for that array.
That correlates with what I see on other setups (6 x 200 ≈ 1200 MB/s).
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel »

Alright, that means you are seeing the same results as me and a lot of other people. It seems capped somewhere.

What we all have in common is Veeam (on different versions) and VMware (on different versions). I honestly think this is a limitation in VMware and not in Veeam; Veeam is just using whatever APIs and libraries VMware makes available to them...
lightsout
Expert
Posts: 227
Liked: 62 times
Joined: Apr 10, 2014 4:13 pm
Contact:

Re: Direct Storage Access FC - slow

Post by lightsout »

I still think there is something odd happening, but yeah, guess it is not clear what is causing it. :( I will post if I ever find out what!
kte
Expert
Posts: 179
Liked: 8 times
Joined: Jul 02, 2013 7:48 pm
Full Name: Koen Teugels
Contact:

Re: Direct Storage Access FC - slow

Post by kte »

1.2 GB/s is the 8 Gb FC limit, correct? I'm not sure if Veeam pulls over multiple paths? Also, the 3PAR gives priority to random I/O.
The per-VMDK limit is maybe a VMware issue/limitation; there is a lot of software with single-stream limitations.
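For reference, the rough math on the link speed (assuming plain 8G FC with 8b/10b encoding):

8G FC line rate = 8.5 Gbaud
payload         = 8.5 x 8/10 = 6.8 Gbit/s
per direction   = 6.8 / 8    ≈ 0.85 GB/s (usually quoted as ~800 MB/s)

So a single 8 Gb link tops out around 800 MB/s of reads, and 1.2 GB/s already implies the traffic is spread over more than one path.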
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert »

In theory, you could pull 2400 MB/s, since the 3PAR is active/active and you are using round robin at the host level. But as you said, the InForm OS acts to maintain overall performance for all VVs and hosts. I'll do a test on an all-flash 8000 next month and let you know.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
kte
Expert
Posts: 179
Liked: 8 times
Joined: Jul 02, 2013 7:48 pm
Full Name: Koen Teugels
Contact:

Re: Direct Storage Access FC - slow

Post by kte »

The 8000 is 16 Gbit x4, so the source shouldn't be any issue.
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Direct Storage Access FC - slow

Post by emachabert »

I was thinking more about the per-VMDK speed rather than the overall speed :D
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
csteb
Lurker
Posts: 2
Liked: never
Joined: Mar 30, 2016 12:05 pm
Contact:

Re: Direct Storage Access FC - slow

Post by csteb »

Hi,

I have had the same problem since we updated to v9.
v8 backup time for 6.3 TB of data via FC: 5 hours
v9 backup time for 6.3 TB of data via FC: 8 hours
Vitaliy S.
VP, Product Management
Posts: 27055
Liked: 2710 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Direct Storage Access FC - slow

Post by Vitaliy S. »

csteb, the OP is not discussing the difference between v8 and v9. It seems like you're facing something different. Did you contact our support team with both job logs (from v8 and from v9) for review?
csteb
Lurker
Posts: 2
Liked: never
Joined: Mar 30, 2016 12:05 pm
Contact:

Re: Direct Storage Access FC - slow

Post by csteb »

That's correct, but the FC performance decreased with v9. There may be a relation.
I've contacted support without success; the ticket is still open.
Vitaliy S.
VP, Product Management
Posts: 27055
Liked: 2710 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Direct Storage Access FC - slow

Post by Vitaliy S. »

Please post your case ID, so that we can always reference it when discussing this internally.
m1kkel
Enthusiast
Posts: 47
Liked: 1 time
Joined: Nov 06, 2014 8:01 pm
Full Name: Mikkel Nielsen
Contact:

Re: Direct Storage Access FC - slow

Post by m1kkel »

My support case is still open; I will post results when I have any.

Looking forward to hearing from emachabert on the single-threaded 3PAR 8000 results.

I agree that there seems to be a maximum throughput per single-threaded stream...