JosueM
Expert
Posts: 187
Liked: 12 times
Joined: Sep 01, 2012 2:53 pm
Full Name: Josue Maldonado
Contact:

Direct SAN vs hotadd performance case 01091515

Post by JosueM »

Good day everyone.

I've set up a physical server (Dell PowerEdge R320) connected directly to the SAN (Dell MD3200i) and tested the performance. Since performance in Direct SAN mode is poor, I opened a ticket, and the final recommendation of the tech support engineer was to continue using hotadd instead of Direct SAN.

It is "normal" that direct SAN be slower that hotadd in some cases, like our case? should I keep trying to figure out what's the performance issue with direct SAN or doesn't worth it. I mean do the jobs gonna get a significant performance boost?

Thanks in advance.

[Image: job statistics screenshot]
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN vs hotadd performance case 01091515

Post by PTide »

Hi,

Please describe your environment a little bit - do you use a virtual or a physical proxy? If physical, please describe what kind of connection is used between the proxy and the production storage when using SAN mode.

Thank you.
JosueM
Expert
Posts: 187
Liked: 12 times
Joined: Sep 01, 2012 2:53 pm
Full Name: Josue Maldonado
Contact:

Re: Direct SAN vs hotadd performance case 01091515

Post by JosueM »

Hello PTide,

The proxy is the same physical Veeam backup server. I've attached a picture to describe the environment as best as possible. The entire storage network is isolated, and it's 1 Gigabit.
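
For reference, this is the back-of-envelope ceiling I'm working against on a single 1 GbE path (just arithmetic, assuming typical protocol overhead, not a measurement):

Code: Select all

# Rough throughput ceiling for a single 1 GbE iSCSI path (plain arithmetic).
link_bits_per_sec = 1_000_000_000          # 1 Gigabit Ethernet
raw_bytes_per_sec = link_bits_per_sec / 8  # 125 MB/s theoretical maximum

# Assume roughly 5-10% lost to Ethernet/IP/TCP/iSCSI framing and ACKs.
overhead = 0.08
practical = raw_bytes_per_sec * (1 - overhead)

print(f"Theoretical max: {raw_bytes_per_sec / 1e6:.0f} MB/s")
print(f"Realistic max per path: ~{practical / 1e6:.0f} MB/s")
# Anything well below ~100 MB/s in Direct SAN mode points at the initiator,
# MPIO, or switch config rather than the wire itself.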

Thanks.

[Image: environment diagram]
Gostev
Chief Product Officer
Posts: 31812
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Direct SAN vs hotadd performance case 01091515

Post by Gostev »

JosueM wrote: Is it "normal" for Direct SAN to be slower than hotadd in some cases, like ours?
It's not normal, but it's definitely not untypical, and it can always be resolved if you are willing to put enough time into it: updating firmware on all HBAs (or swapping them for different ones), updating MPIO software (ironically, sometimes uninstalling it helps instead), checking your network setup, etc.

If the above does not help (though it usually does), then the best option is to seek help from your SAN vendor directly, as most of their support cases are performance-related and they typically have a huge internal KB on fixing I/O performance. Many vendors have tools in place to monitor I/O requests as they hit the storage; this alone will help isolate the issue quickly.
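
To take Veeam out of the equation first, even a crude sequential read from the proxy against the iSCSI LUN will show what the storage path itself can deliver. A minimal sketch, assuming a Windows proxy and an elevated prompt (the device number is just an example - take the correct one from Disk Management):

Code: Select all

import time

# Crude sequential read benchmark against a raw device (Windows example).
# Read-only, but double-check the device number before running. Raw
# \\.\PhysicalDriveN reads must be sector-aligned; 1 MiB chunks are fine.
DEVICE = r"\\.\PhysicalDrive1"   # example path - substitute your iSCSI LUN
CHUNK = 1024 * 1024              # 1 MiB per read
TOTAL = 2 * 1024 * 1024 * 1024   # stop after 2 GiB

read = 0
start = time.monotonic()
with open(DEVICE, "rb", buffering=0) as dev:
    while read < TOTAL:
        data = dev.read(CHUNK)
        if not data:
            break
        read += len(data)
elapsed = time.monotonic() - start
print(f"{read / 1e6:.0f} MB in {elapsed:.1f}s = {read / elapsed / 1e6:.0f} MB/s")

If that number is already far below what the array should deliver, Veeam is not your problem.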
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: Direct SAN vs hotadd performance case 01091515

Post by skrause »

Is your MTU size consistent through the whole stack? I experienced weird issues with performance on an MD3200i that we traced back to one adapter being set at 1500 instead of 9000.
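
A quick way to verify jumbo frames end to end is a don't-fragment ping at jumbo payload size from the proxy to each array port; if any hop is still at 1500, it fails instead of fragmenting. Something like this sketch (the target IPs are placeholders, and 8972 = 9000 minus the 28 bytes of IP/ICMP headers):

Code: Select all

import subprocess

# Don't-fragment ping at jumbo size from a Windows proxy. A hop still at
# MTU 1500 will reject the 9000-byte packet instead of fragmenting it.
TARGETS = ["192.168.130.101", "192.168.130.102"]  # placeholder array port IPs

for ip in TARGETS:
    result = subprocess.run(
        ["ping", "-f", "-l", "8972", "-n", "2", ip],  # -f = don't fragment
        capture_output=True, text=True,
    )
    ok = "TTL=" in result.stdout  # replies came back unfragmented
    print(f"{ip}: {'jumbo OK' if ok else 'FAILED - check MTU on this path'}")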

In your testing, did you run an active full on the same VM in each mode? Your screenshot shows two different VMs of different sizes.
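
If you rerun the test, the session statistics make the comparison easy, since each disk line carries the transport marker. A rough sketch to pull the rates out of exported job logs (it assumes lines shaped like "20.3 GB read at 45 MB/s [san]" - adjust the pattern to your log format):

Code: Select all

import re
import sys

# Summarize per-disk processing rates by transport mode from exported
# Veeam job session logs passed as command-line arguments.
LINE = re.compile(r"read at (\d+(?:\.\d+)?) MB/s \[(\w+)\]")

rates = {}  # mode -> list of MB/s values
for path in sys.argv[1:]:
    with open(path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            m = LINE.search(line)
            if m:
                rates.setdefault(m.group(2), []).append(float(m.group(1)))

for mode, vals in sorted(rates.items()):
    print(f"{mode}: {len(vals)} disks, avg {sum(vals) / len(vals):.0f} MB/s")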

Also, what is your target storage? If you are pushing backups to another device over the same iSCSI fabric, I could see how you might be creating an additional bottleneck that is not there when you use hot add, since in that case only the backup repository traffic goes through the iSCSI fabric.
Steve Krause
Veeam Certified Architect
JosueM
Expert
Posts: 187
Liked: 12 times
Joined: Sep 01, 2012 2:53 pm
Full Name: Josue Maldonado
Contact:

Re: Direct SAN vs hotadd performance case 01091515

Post by JosueM »

Hello skrause, jumbo frames are enabled on the disk array, the switches, and the hosts. The only difference I see is that on the ESXi hosts and the disk array the packet size is 9000, while on the Veeam server the value is 9014; it does not let me set it manually, it just offers three options: disabled, 4088 bytes, and 9014 bytes. (I suspect the 9014 is just the 9000-byte payload plus the 14-byte Ethernet header, so it should be equivalent.)

I'm sorry, that screenshot was a mistake. I'm uploading the right one, from the same VM in Direct SAN mode.
[Image: corrected job statistics screenshot]

The target storage is a local SATA drive array in the Veeam server.
JosueM
Expert
Posts: 187
Liked: 12 times
Joined: Sep 01, 2012 2:53 pm
Full Name: Josue Maldonado
Contact:

Re: Direct SAN vs hotadd performance case 01091515

Post by JosueM »

Hello Gostev,

I also suspect an issue with the disk array or the iSCSI network, but I've been unable to figure it out so far. I tried both the native Windows iSCSI initiator and the Dell MD drivers, and the behavior is the same. I also tried the Round Robin and Least Queue Depth iSCSI session policies on the initiator, and it's still the same.
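
For what it's worth, this is the quick state dump I've been collecting for the support case; it just wraps the built-in Windows tools, and both need an elevated prompt:

Code: Select all

import subprocess

# Snapshot the iSCSI session and MPIO policy state with the built-in
# Windows tools, so the whole picture can go into the support case.
for cmd in (
    ["iscsicli", "SessionList"],   # active iSCSI sessions and connections
    ["mpclaim", "-s", "-d"],       # per-disk MPIO load-balance policy
):
    print("=" * 20, " ".join(cmd), "=" * 20)
    out = subprocess.run(cmd, capture_output=True, text=True)
    print(out.stdout or out.stderr)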

Thanks.
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: Direct SAN vs hotadd performance case 01091515

Post by skrause » 1 person likes this post

I think, as Anton mentioned, this probably warrants a call to Dell support.

Or you could say the hell with it and just use hot add :)
Steve Krause
Veeam Certified Architect
JosueM
Expert
Posts: 187
Liked: 12 times
Joined: Sep 01, 2012 2:53 pm
Full Name: Josue Maldonado
Contact:

Re: Direct SAN vs hotadd performance case 01091515

Post by JosueM »

Hahaha, I bet the second option is the best. Thanks for your help.