Direct SAN vs hotadd performance case 01091515

by JosueM » Tue Nov 17, 2015 4:08 pm

Good day everyone.

I've set up a physical server (Dell PowerEdge R320) with a direct connection to the SAN (Dell MD3200i) and tested the performance. Since Direct SAN performance was poor, I opened a ticket, and the tech support engineer's final recommendation was to keep using hotadd instead of Direct SAN.

It is "normal" that direct SAN be slower that hotadd in some cases, like our case? should I keep trying to figure out what's the performance issue with direct SAN or doesn't worth it. I mean do the jobs gonna get a significant performance boost?

Thanks in advance.

[Image: backup job statistics screenshots]
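As a side note, it can help to reduce each test run to a single MB/s figure so the two transport modes are compared on the same VM and the same amount of data. A minimal Python sketch with placeholder numbers standing in for the real job statistics:

# Rough throughput comparison between two backup runs of the same VM.
# The sizes and durations below are placeholders; substitute the values
# shown in the Veeam job statistics for each transport mode.

def rate_mb_s(processed_gb: float, duration_s: float) -> float:
    """Average processing rate in MB/s for one job run."""
    return processed_gb * 1024 / duration_s

direct_san = rate_mb_s(processed_gb=120.0, duration_s=3600)   # hypothetical run
hotadd     = rate_mb_s(processed_gb=120.0, duration_s=1800)   # hypothetical run

print(f"Direct SAN: {direct_san:.1f} MB/s")
print(f"Hot-add:    {hotadd:.1f} MB/s")
print(f"Hot-add is {hotadd / direct_san:.1f}x faster on these numbers")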

Re: Direct SAN vs hotadd performance case 01091515

by PTide » Tue Nov 17, 2015 4:17 pm

Hi,

Please describe your environment a little bit - do you use a virtual or a physical proxy? If physical, please describe what kind of connection is used between the proxy and the production storage in SAN mode.

Thank you.

Re: Direct SAN vs hotadd performance case 01091515

by JosueM » Tue Nov 17, 2015 5:15 pm

Hello PTide,

The proxy is the physical Veeam backup server itself. I've attached a picture describing the environment as best as possible. The entire storage network is isolated, and it's 1 Gigabit.

Thanks.

[Image: environment diagram]
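Worth keeping in mind when judging the Direct SAN numbers: a single 1 Gb iSCSI path caps reads at roughly 110-115 MB/s once protocol overhead is accounted for. A quick back-of-the-envelope sketch (the overhead factor and path count are assumptions, not measurements):

# Back-of-the-envelope ceiling for iSCSI reads over Gigabit Ethernet.
# Overhead factor and path count are rough assumptions.

LINK_GBPS = 1.0          # 1 GbE link
OVERHEAD = 0.90          # ~10% lost to TCP/IP + iSCSI framing (rough guess)
PATHS = 1                # effective concurrent paths per disk read

ceiling_mb_s = LINK_GBPS * 1000 / 8 * OVERHEAD * PATHS
print(f"Theoretical ceiling: ~{ceiling_mb_s:.0f} MB/s per disk")
# Job rates close to this figure mean the link, not the transport mode, is the
# limit; rates far below it point at MPIO, MTU, or array-side issues.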

Re: Direct SAN vs hotadd performance case 01091515

by Gostev » Tue Nov 17, 2015 8:54 pm

JosueM wrote: Is it "normal" for Direct SAN to be slower than hotadd in some cases, like ours?

It's not normal, but it's definitely not untypical either - and it can always be resolved if you are willing to put enough time into things like updating the firmware on all HBAs (or swapping them for different ones), updating the MPIO software (ironically, sometimes uninstalling it helps instead), checking your network setup, etc.

If the above does not help (which it usually does), then it's best to seek help from your SAN vendor directly, as most of their support cases are performance related and they typically have a huge internal KB on fixing I/O performance. Many vendors have tools in place to monitor I/O requests as they hit the storage; this alone helps to isolate the issue quickly.
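Before opening the vendor case, it may help to capture the proxy's multipath and iSCSI state in one place. A minimal Python sketch that shells out to in-box Windows tools (it assumes the Microsoft MPIO/iSCSI stack; with the Dell MD DSM installed, the vendor's own utilities would be the equivalent):

# Gather basic multipath/iSCSI state from a Windows proxy before opening a
# storage-vendor case. Run from an elevated prompt; output goes to stdout.
import subprocess

commands = [
    ["mpclaim.exe", "-s", "-d"],   # MPIO disks and their load-balance policy
    ["powershell", "-NoProfile", "-Command", "Get-IscsiSession | Format-List"],
    ["powershell", "-NoProfile", "-Command",
     "Get-NetAdapterAdvancedProperty -RegistryKeyword '*JumboPacket'"],
]

for cmd in commands:
    print("=== " + " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)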

Re: Direct SAN vs hotadd performance case 01091515

by skrause » Tue Nov 17, 2015 9:12 pm

Is your MTU size consistent through the whole stack? I experienced weird issues with performance on an MD3200i that we traced back to one adapter being set at 1500 instead of 9000.

In your testing, did you run an active full on the same VM in each mode? Your screenshot shows two different VMs of different sizes.

Also, what is your target storage? If you are pushing backups to another device over the same iSCSI fabric, I could see you creating an additional bottleneck that isn't there when you use hot add, since in that case only your backup repository traffic goes through the iSCSI fabric.
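Regarding the MTU question above: one quick way to confirm the 9000-byte MTU survives the whole path is a do-not-fragment ping with an 8972-byte payload (9000 minus 28 bytes of IP/ICMP headers). A minimal Python sketch for the Windows proxy side; the target addresses are placeholders, and the ESXi side can be checked the same way with vmkping -d -s 8972:

# Verify jumbo frames end-to-end from the Windows proxy with do-not-fragment
# pings. 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000-byte MTU.
# Replace the addresses with the real iSCSI portal / ESXi vmkernel IPs.
import subprocess

targets = ["192.168.130.101", "192.168.130.102"]   # placeholder iSCSI portals

for ip in targets:
    cmd = ["ping", "-f", "-l", "8972", "-n", "2", ip]   # -f: don't fragment, -l: size
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0 and "needs to be fragmented" not in result.stdout
    print(f"{ip}: {'jumbo OK' if ok else 'jumbo FAILED or blocked'}")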

Re: Direct SAN vs hotadd performance case 01091515

by JosueM » Tue Nov 17, 2015 10:20 pm

Hello skrause, jumbo frames are enabled on the disk array, switches, and hosts. The only difference I see is that on the ESXi hosts and the disk array the packet size is 9000, while on the Veeam server the value is 9014; it doesn't let me set it manually, it just has three options: disabled, 4088 Bytes, and 9014 Bytes.

I'm sorry, that screenshot was a mistake; I'm uploading the right screenshot from the same VM in Direct SAN mode.
[Image: Direct SAN job statistics screenshot]

The target storage is a local SATA drive array in the Veeam server.
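On the 9000 vs 9014 point: many Windows NIC drivers express the jumbo-frame setting as the full Ethernet frame size (payload plus the 14-byte header), so "9014 Bytes" normally corresponds to the same 9000-byte MTU configured on the array and the ESXi hosts. A trivial sketch of the arithmetic:

# Many Windows NIC drivers state the jumbo-frame setting as the full Ethernet
# frame size (payload + 14-byte header), so "9014 Bytes" normally maps to the
# same 9000-byte MTU configured elsewhere in the stack.
ETHERNET_HEADER = 14
driver_value = 9014          # as shown in the NIC's advanced properties
mtu = driver_value - ETHERNET_HEADER
print(f"Effective MTU: {mtu}")   # -> 9000, consistent with array and hosts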

Re: Direct SAN vs hotadd performance case 01091515

by JosueM » Tue Nov 17, 2015 10:32 pm

Hello Gostev,

I'm also suspecting an issue in the disk array or the iSCSI network, but I've been unable to figure it out so far. I tried both the native Windows iSCSI initiator and installing the Dell MD drivers, and the behavior is the same. I also tried the round robin and least queue depth iSCSI session policies on the initiator, and it's still the same.

Thanks.
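For reference, the policy being tested can be confirmed (and changed) from an elevated prompt on the proxy. A minimal Python sketch assuming the in-box Microsoft DSM is claiming the MD3200i LUNs (with the Dell MD DSM installed, Dell's own tools apply instead):

# Inspect and (optionally) change the MPIO load-balance policy on the proxy,
# assuming the in-box Microsoft DSM. Run from an elevated prompt.
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command and return its output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True,
    )
    return result.stdout or result.stderr

# Current default policy for new LUNs (RR = Round Robin, LQD = Least Queue Depth)
print(ps("Get-MSDSMGlobalDefaultLoadBalancePolicy"))

# Per-disk view of paths and the policy actually applied
print(subprocess.run(["mpclaim.exe", "-s", "-d"],
                     capture_output=True, text=True).stdout)

# Example of switching the default to Least Queue Depth (uncomment to apply):
# print(ps("Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD"))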

Re: Direct SAN vs hotadd performance case 01091515

by skrause » Wed Nov 18, 2015 2:11 pm (1 person likes this post)

I think, as Anton mentioned, this probably warrants a call to Dell support.

Or you could say the hell with it and just use hot add :)

Re: Direct SAN vs hotadd performance case 01091515

by JosueM » Wed Nov 18, 2015 4:23 pm

Hahaha, I bet the second option is the best. Thanks for your help.

