-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
I see you have NL disks; do you use AO?
Does the VM you are trying to back up have disks spread over different disk types through an AO policy?
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Expert
- Posts: 170
- Liked: 29 times
- Joined: Apr 28, 2015 7:18 am
- Full Name: Patrick
- Location: Germany
- Contact:
Re: Direct Storage Access FC - slow
Sorry if it is a stupid question, but what is "fillword"? We also use a Brocade FC switch, not with 3PAR but with NetApp, so maybe this is an interesting setting for me too.
emachabert wrote: Just follow best practices:
- Windows MPIO configured for 3PAR VVs
- Fillword set to 3 on Brocade 8Gb fabrics
- All VM disks are eager zeroed thick
- VVs are thin provisioned
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
Have a look at this blog post: http://www.erwinvanlonden.net/2012/03/f ... ey-needed/
He explains it very well.
When using Brocade switches and 8Gb/s HBAs, you should set the fillword to 3 (99.99% of the time); just check the prerequisites from your storage vendor.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
Thanks for that info, I really appreciate it.
emachabert wrote: This is a known issue with Brocade fabrics @8Gb/s.
Fillword should be set to 3 (if ARBF/ARBF fails, use IDLE/ARBF); otherwise the bad_os error counter increases continuously.
Beware: configuring the fillword will disable/enable the port, so do one port at a time with a 5 minute pause between each.
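For reference, the per-port procedure on a Brocade FOS CLI session looks roughly like this (a sketch; the port number is a placeholder, and the exact counter names can vary by FOS version):

```text
portcfgfillword 0 3   # mode 3 = ARBF/ARBF, falling back to IDLE/ARBF if that fails
portstatsshow 0       # afterwards, the er_bad_os counter should stop incrementing
```

Since the command bounces the port, work through one port at a time as noted above.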
Regarding eager zeroed thick, you should definitely look at the literature on thin on thin, thin on thick, thick on thick and thick on thin.
When dealing with a 3PAR, which has hardware-assisted thin provisioning and global wide striping, you should really consider using thick on thin (eager zeroed).
One Veeam benefit of using thick VM disks is DirectSAN restore and CBT restore! Think about it!
So I fixed everything last night: changed the fillword and converted my test server to eager zeroed thick.
However, backup speeds are the same (full backup).
There are no errors on my switch anymore.
If I put 4 servers in one full backup job, I'm pushing 400 MB/s, but I think I should push that amount with just one job; all the data is spread across all disks, that's the beauty of 3PAR.
Can you try to do an active full on one of the systems you are managing, with just one server with one disk, for comparison?
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
Hi.
BrandonH wrote: Interesting, I'm seeing the same numbers as you (130-180 MB/s). I'm using Brocade Condor3/16G (no fillword setting), so that's not one of my options.
I have two 7400s: 24 SSD, 148 FC, 60 NL.
I have two proxies, HP DL380 G9s (dual 12-core, 32 GB RAM) with Brocade/QLogic 1862s with 16G FC/10G Ethernet.
I use thin VVs with eager zeroed thick.
My 3PARs are separated by roughly 66k of DWDM, two 10G for Ethernet, two 10G FC (roughly 500ns return). <-- This is my only complicating factor.
I back up to FC storage, with a copy job that then moves the backup to a StoreOnce appliance for longer retention.
All of my hosts reside on 1862 Brocades as well, same 16G/10G setup. I use storage snapshots for my backups. My hosts are also running 5.5.
When speaking to support a year or so ago, I was told the speeds I'm getting are normal and that I wouldn't see any faster. I don't have any fabric or port errors; everything seems to be running very clean. An active full nets me 188 MB/s on average. I also get 99% source, 24% proxy, 1% network, 1% target (that math seems a bit off too, rofl).
I'm running flash caching and was taking a large number of hits. I tend to stay around 25-30% during my peak traffic times.
You have an awesome system, and you are getting the same low numbers as me, compared to the systems both of us own.
I changed the fillword, and all errors are gone.
I changed my test server to eager zeroed thick.
Ran a new backup: same speed.
I really don't get it; it seems like there's a cap somewhere.
If I put 4 servers in a new job and run a full, I push 400 MB/s, but still, why can't I push more than 190 MB/s out of a single VMDK?
I was thinking of creating a test LUN on SSD just to see if that actually changes anything.
I have no QoS set up in my 3PAR.
Have you tried running diskspd.exe from your proxy on an NTFS-formatted LUN you present just for testing? I know it's not the same thing, but just to see how much the 3PAR can deliver. Alternatively, run it in a VM; then you are going through the vmkernel API.
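For reference, a large-block read test along those lines might look like this (a sketch; the drive letter and file size are placeholders, so check the flags against your diskspd version):

```text
diskspd.exe -c50G -b512K -d60 -t4 -o8 -Sh -w0 T:\testfile.dat
```

Here -c50G creates the test file, -b512K uses 512KB blocks (closer to large sequential backup reads than the 4K default), -t4 and -o8 give 4 threads with 8 outstanding I/Os each, -Sh disables software and hardware write caching, and -w0 makes it 100% reads.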
How much do you get with 4 servers in one job?
Support tells me that they have never seen anything run at 1000 MB/s or anywhere near that... but WHY???
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
Did a full on a Peer Persistence cluster (2x 7200, 48x 450GB 10K each).
One VM with one disk of 60GB (only the OS is installed):
- Processed: 60GB
- Read: 42.6GB
- Transferred: 7.9GB
- Duration: 03:45
- Bottleneck: source
- Processing rate: 592MB/s
Then a full restore (Direct SAN):
- Peak speed: 415MB/s
- Processing speed: 222MB/s
- Duration: 5m21
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
Really cool, could you please show me the per-disk speed as well?
I am thinking the processing rate is high because only a small portion of the actual VMDK is data, but I am not sure.
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
For sure, most of what was read was blank (eager zeroed).
For the part that wasn't blank, it ran between 280 and 450 MB/s.
Look at my old post: veeam-backup-replication-f2/3par-at-ful ... tml#p99040
You need parallel streams to get the best from the system.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
Yeah, I already have parallel streams, but my proxy is limited to 4 concurrent threads, since it is only a 4-core Xeon.
So it seems like it is possible to push more with more servers in the same job, but it's just weird that we can't deliver more than 200 MB/s per VMDK.
Also, I do not understand why I can't reach your speed on a single VM!
I just did a Direct SAN restore of the same VM, restore rate 175 MB/s, and:
17-03-2016 15:09:13 Restoring Hard disk 1 (95.0 GB): 49.0 GB restored at 132 MB/s
That seems slow, but I've read somewhere that there's an issue with that on 3PAR, and that I need to create a new LUN, correct?
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
I am using Intel E5v3 8 or 10 core proxies.
Regarding the new LUN creation, I am not aware of that requirement. At least, I never did that myself.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
Something in regard to Veeam using lazy zeroed thick provisioning instead of eager, as I remember.
Thread here: vmware-vsphere-f24/slow-restore-speed-2 ... 92-60.html
Question: what are the row and set sizes for your LD?
-
- Influencer
- Posts: 21
- Liked: 2 times
- Joined: Jul 10, 2013 4:37 pm
- Full Name: Brandon Hogue
- Contact:
Re: Direct Storage Access FC - slow
We don't use AO; I sort by hand. 90% of my VMs reside on FC. NL drives we use mostly for logging. SSDs are for VDI testing and flash cache. I have moved VVols over to them with more or less the same results.
My proxies are E5v3 8C/16-thread (I had to look to remember).
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
Set size is 5+1.
I have to look up the row size, but everything is at default values (grow increment and so on).
If you have added disks after the initial setup, be sure to run the tunesys command to rebalance the chunklets across all disks.
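From the 3PAR CLI that check and rebalance would look something like this (a sketch; exact output columns vary by InForm OS version, and tunesys can run for a long time, so schedule it off-hours):

```text
showpd -c     # per-disk chunklet usage; newly added disks should not sit mostly empty
checkhealth   # confirm the array is healthy before rebalancing
tunesys       # rebalance chunklets across all physical disks
```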
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
OK, well, in my opinion we need to find the difference between your setup and mine versus emachabert's setup.
BrandonH wrote: We don't use AO; I sort by hand. 90% of my VMs reside on FC. NL drives we use mostly for logging. SSDs are for VDI testing and flash cache. I have moved VVols over to them with more or less the same results.
My proxies are E5v3 8C/16-thread (I had to look to remember).
Here I am talking about the 3PAR part, because I think this is where the issue exists. Where else could it be?
I have 32x 600GB 10K FC disks + 8x 480GB SSDs, with AO.
CPG is configured like this:
Are there any other settings relevant for this comparison?
I would especially like to see emachabert's settings.
-
- Expert
- Posts: 227
- Liked: 62 times
- Joined: Apr 10, 2014 4:13 pm
- Contact:
Re: Direct Storage Access FC - slow
I've found this thread very interesting. You're describing the same issue I have here:
m1kkel wrote: Yeah, I already have parallel streams, but my proxy is limited to 4 concurrent threads, since it is only a 4-core Xeon.
So it seems like it is possible to push more with more servers in the same job, but it's just weird that we can't deliver more than 200 MB/s per VMDK.
Also, I do not understand why I can't reach your speed on a single VM!
I just did a Direct SAN restore of the same VM, restore rate 175 MB/s, and:
17-03-2016 15:09:13 Restoring Hard disk 1 (95.0 GB): 49.0 GB restored at 132 MB/s
That seems slow, but I've read somewhere that there's an issue with that on 3PAR, and that I need to create a new LUN, correct?
veeam-backup-replication-f2/netapp-sour ... 27025.html
With multiple streams I get over 500 MB/s, but with a single VMDK I'm lucky if I go over 100 MB/s! Totally different backend SAN, but the one thing we do have in common is 8Gb FC, although I'm on Nexus 5k storage switches.
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
A-ha, interesting! Maybe this is not a storage issue after all; at least, not one related to a specific storage system. I just got off the phone with VMware, but unfortunately I do not have SDK entitlement support, which is where the VDDK library resides, so support can't help me.
As I said before, it seems like the speed is capped somewhere. When we use Direct SAN backup, we are utilizing VMware's VDDK, so testing speed on a disk inside a VM may give a different result. https://www.vmware.com/support/develope ... notes.html
It is also interesting that emachabert can push much more than the rest of us.
What else can we test/investigate?
-
- Expert
- Posts: 227
- Liked: 62 times
- Joined: Apr 10, 2014 4:13 pm
- Contact:
Re: Direct Storage Access FC - slow
I actually have multiple NetApp SANs with the same issues, and they are all 8Gb FC/Nexus 5ks. So given what you've seen and what others have seen, my immediate thought is to look at the FC switch!
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
Since all the arrays I installed are at different customer locations, I can't give you the information as quickly as you would like. Next week I will run a new test on a 7200 using Veeam 9 and give you the results.
Regarding NetApp, I often find the same results as you when I deploy Veeam on that type of array; I haven't been able to push more than 700 MB/s (multiple streams) on the biggest setup I have worked with.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
So I did a test on a 7200 with 63x 450GB 10K (RAID 5, 5+1).
One VM, with only one 60GB disk with real data inside (so as not to read zeroes...):
Processing speed: 196 MB/s
Average read speed: 204 MB/s
Max read speed: 227 MB/s
Trying with two VMs gives me twice those numbers, and so on, until I cap around 1 GB/s, which seems to be the max for that array.
This correlates with what I see on other setups (6 x 200 ~ 1200 MB/s).
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
All right, that means you are seeing the same results as me and a lot of other people. It seems capped somewhere.
What we all have in common is Veeam (on different versions) and VMware (on different versions). I honestly think this is a limitation in VMware, not in Veeam. Veeam is just using whatever APIs and libraries VMware makes available to them.
-
- Expert
- Posts: 227
- Liked: 62 times
- Joined: Apr 10, 2014 4:13 pm
- Contact:
Re: Direct Storage Access FC - slow
I still think something odd is happening, but yeah, I guess it is not clear what is causing it. I will post if I ever find out!
-
- Expert
- Posts: 179
- Liked: 8 times
- Joined: Jul 02, 2013 7:48 pm
- Full Name: Koen Teugels
- Contact:
Re: Direct Storage Access FC - slow
1.2 GB/s is the 8Gb FC limit, correct? I'm not sure if Veeam pulls on multiple paths. Also, the 3PAR gives priority to random I/O.
The per-VMDK limit is maybe a VMware issue/limitation; there is a lot of software with single-stream limitations.
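The per-link numbers behind that estimate can be sanity-checked with a quick back-of-envelope calculation (a sketch, using the encoding overheads from the FC standards and ignoring frame/protocol overhead):

```python
def fc_usable_mb_per_s(gbaud: float, encoding_efficiency: float) -> float:
    """Approximate one-way usable payload bandwidth of a Fibre Channel
    link in decimal MB/s, ignoring frame headers and protocol overhead."""
    return gbaud * 1e9 * encoding_efficiency / 8 / 1e6

# 8GFC signals at 8.5 Gbaud with 8b/10b encoding (80% efficient)
print(round(fc_usable_mb_per_s(8.5, 8 / 10)))       # ~850 MB/s per link
# 16GFC signals at 14.025 Gbaud with 64b/66b encoding (~97% efficient)
print(round(fc_usable_mb_per_s(14.025, 64 / 66)))   # ~1700 MB/s per link
```

So a single 8Gb link tops out around 800-850 MB/s of payload; sustained rates much above that imply multiple paths really are being used in parallel, and real-world numbers land a bit lower once frame and protocol overhead are subtracted.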
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
In theory, you could pull 2400 MB/s, since the 3PAR is active/active and you are using round robin at the host level. But as you said, the InForm OS acts to maintain overall performance for all VVs and hosts. I'll do a test on an all-flash 8000 next month and let you know.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Expert
- Posts: 179
- Liked: 8 times
- Joined: Jul 02, 2013 7:48 pm
- Full Name: Koen Teugels
- Contact:
Re: Direct Storage Access FC - slow
The 8000 is 16Gbit x4, so the source shouldn't be any issue.
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Direct Storage Access FC - slow
I was thinking more about the per-VMDK speed than the overall speed.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Mar 30, 2016 12:05 pm
- Contact:
Re: Direct Storage Access FC - slow
Hi,
I've had the same problem since we updated to v9:
V8 backup time for 6.3 TB of data via FC: 5 hours
V9 backup time for 6.3 TB of data via FC: 8 hours
-
- VP, Product Management
- Posts: 27371
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Direct Storage Access FC - slow
csteb, the OP is not discussing a difference between v8 and v9; it seems like you're facing something different. Did you contact our support team with both job logs (from v8 and from v9) for review?
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Mar 30, 2016 12:05 pm
- Contact:
Re: Direct Storage Access FC - slow
That's correct, but the FC performance decreased with v9. There may be a relation.
I've contacted support without success. The ticket is still open.
-
- VP, Product Management
- Posts: 27371
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Direct Storage Access FC - slow
Please post your case ID, so that we can reference it when discussing this internally.
-
- Enthusiast
- Posts: 47
- Liked: 1 time
- Joined: Nov 06, 2014 8:01 pm
- Full Name: Mikkel Nielsen
- Contact:
Re: Direct Storage Access FC - slow
My support case is still open; I will post results when I have any.
Looking forward to hearing from emachabert on the 3PAR 8000 single-threaded results.
I agree that there seems to be a max throughput per single-threaded stream.