emachabert wrote: Just follow best practices:
- Windows MPIO configured for 3PAR VVs (see the sketch after this list)
- Fillword set to 3 on Brocade 8Gb fabrics
- All VM disks eager zeroed thick
- VVs thin provisioned
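For the MPIO point, a minimal sketch of what that looks like on a Windows proxy, assuming the in-box Microsoft DSM (verify the exact steps against HPE's 3PAR Windows implementation guide for your version):

```
rem Minimal sketch: claim 3PAR VVs for Microsoft MPIO on a Windows proxy.
rem "3PARdataVV" is the 8-character vendor ID "3PARdata" plus product "VV";
rem -i claims the device string, -r reboots the host to apply it.
mpclaim -r -i -d "3PARdataVV"

rem After the reboot: list claimed devices, then set Round Robin (policy 2).
mpclaim -s -d
mpclaim -l -m 2
```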
emachabert wrote: This is a known issue with Brocade fabrics at 8Gb/s.
Fillword should be set to 3 (if ARBF/ARBF fails, use IDLE/ARBF); otherwise the bad_os error counter increases continuously.
Beware: configuring the fillword disables/enables the port, so do one port at a time with a 5 minute pause between each.
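As a rough sketch of that procedure on the switch (FOS CLI; the port numbers are examples, and exact syntax varies slightly between FOS releases):

```
portstatsshow 0          # watch er_bad_os - it should not keep climbing
portcfgfillword 0, 3     # mode 3 = try ARBF/ARBF, fall back to IDLE/ARBF
# The command bounces the port: wait ~5 minutes before doing the next one.
portcfgfillword 1, 3
portcfgshow              # confirm the fill word column now shows 3
```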
Regarding eager zeroed thick, you should definitely look at the literature about Thin on Thin, Thin on Thick, Thick on Thick and Thick on Thin.
When dealing with a 3PAR, with hardware-assisted thin provisioning and global wide striping, you should really consider using Thick on Thin (Eager Zeroed).
One Veeam benefit of thick VM disks is DirectSAN restore and CBT restore!! Think about it!
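If you want to try that layout, a minimal sketch from the ESXi shell (datastore and VMDK names are made up; PowerCLI's New-HardDisk -StorageFormat EagerZeroedThick is the equivalent):

```
# Create a new eager zeroed thick disk:
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/DS01/vm1/vm1_1.vmdk

# Or eager-zero an existing zeroedthick disk in place (VM powered off):
vmkfstools -k /vmfs/volumes/DS01/vm1/vm1.vmdk
```

On a thin VV the 3PAR's zero-detect hardware discards the written zeros, so the eager zeroing costs very little array space, which is the whole point of Thick on Thin.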
BrandonH wrote: Interesting, I'm seeing the same numbers as you (130-180 MB/s). I'm on Brocade Condor3/16G (no fillword setting), so that's not an option for me.
I have two 7400s: 24 SSD, 148 FC, 60 NL drives.
I have two proxies, HP DL380 G9s (dual 12-core, 32 GB RAM) with Brocade/QLogic 1862s running 16G FC/10G Ethernet.
I use thin VVs with eager zeroed thick VM disks.
My 3PARs are separated by roughly 66 km of DWDM: two 10G links for Ethernet, two 10G for FC (roughly 500ns return). <-- This is my only complicating factor.
I back up to FC storage, with a copy job that then moves the backups to a StoreOnce appliance for longer retention.
All of my hosts are on Brocade 1862s as well, same 16G/10G setup. I use storage snapshots for my backups. My hosts are also running 5.5.
When speaking to support a year or so ago, I was told the speeds I'm getting are normal and that I wouldn't see any faster. I don't have any fabric or port errors; everything seems to be running very clean. An active full nets me 188 MB/s on average. I also get 99% Source, 24% Proxy, 1% Network, 1% Target (that math seems a bit off too, rofl).
I'm running flash caching and was getting a large number of hits; I tend to stay around 25-30% during my peak traffic times.
BrandonH wrote: We don't use AO; I sort by hand. 90% of my VMs reside on FC. The NL drives we use mostly for logging; the SSDs are for VDI testing and flash cache. I have moved VVols over to them with more or less the same results.
My proxies are E5 v3s, 8 cores/16 threads (I had to look to remember).
m1kkel wrote: Yeah, I already have parallel streams, but my proxy is limited to 4 concurrent tasks, since it is only a 4-core Xeon.
So it seems possible to push more with more servers in the same job - but it's just weird that we can't deliver more than 200 MB/s per VMDK.
Also, I do not understand why I can't reach your speed on a single VM!!
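For what it's worth, that task limit can be checked and changed from Veeam's PowerShell snap-in; a minimal sketch, with a made-up proxy name:

```
# Minimal sketch using Veeam's PowerShell snap-in (v8/v9 era);
# "proxy01" is a hypothetical proxy name.
Add-PSSnapin VeeamPSSnapin
$proxy = Get-VBRViProxy -Name "proxy01"
# Keep max concurrent tasks in line with physical cores: on a 4-core
# Xeon that means 4, so more throughput needs more cores or more proxies.
Set-VBRViProxy -Proxy $proxy -MaxTasks 4
```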
Just did a DirectSAN restore of the same VM, restore rate 175 MB/s, and:
17-03-2016 15:09:13 Restoring Hard disk 1 (95,0 GB): 49,0 GB restored at 132 MB/s
That seems slow, but I've read somewhere that there's an issue with that on 3PAR, and that I need to create a new LUN - correct?