I've set up a physical server (Dell PowerEdge R320) connected directly to the SAN (Dell MD3200i) and tested the performance. Since performance in direct SAN mode is poor, I opened a ticket, and the final recommendation of the tech support engineer is to continue using hot add instead of direct SAN.
Is it "normal" for direct SAN to be slower than hot add in some cases, like ours? Should I keep trying to figure out what the performance issue with direct SAN is, or is it not worth it? I mean, would the jobs get a significant performance boost?
Please describe your environment a little bit - do you use a virtual or a physical proxy? If physical, please describe what kind of connection is used between the proxy and the production storage when using SAN mode.
The proxy is the same physical Veeam backup server. I've attached a picture to describe the environment as best as possible. The entire storage network is isolated and it's 1 Gigabit.
JosueM wrote: Is it "normal" for direct SAN to be slower than hot add in some cases, like ours?
It's not normal, but definitely not untypical - and it can always be resolved if you are willing to put enough time into it, doing things like updating firmware on all HBAs (or swapping them for different ones), updating MPIO software (ironically, sometimes uninstalling it helps instead), checking your network setup, etc.
If the above does not help (which it usually does), then the best option is to seek help from your SAN vendor directly, as most of their support cases are performance related and they typically have a huge internal KB on fixing I/O performance. Many vendors have tools in place to monitor I/O requests as they hit the storage; this alone will help isolate the issue quickly.
Is your MTU size consistent through the whole stack? I experienced weird issues with performance on an MD3200i that we traced back to one adapter being set at 1500 instead of 9000.
In your testing, did you run an active full on the same VM in each mode? Your screenshot shows two different VMs of different sizes.
Also, what is your target storage? If you are pushing it to another device over the same iSCSI fabric, I could see how you might be creating an additional bottleneck that is not there when you use hot add, because then only your backup repository traffic goes through the iSCSI fabric.
Hello skrause, jumbo frames are enabled on the disk array, the switches and the hosts. The only difference I see is that on the ESXi hosts and the disk array the packet size is 9000, while on the Veeam server the value is 9014; it does not allow me to set it manually, it just has three options: disabled, 4088 bytes and 9014 bytes.
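From what I've read, the 9014 value on the Windows NIC usually just includes the 14-byte Ethernet header, so it should correspond to the 9000 MTU set on the array and hosts. To double-check that jumbo frames actually pass end to end, I can run a do-not-fragment ping from the Veeam server to the array's iSCSI ports; below is a rough Python sketch of what I mean (the controller IPs are just placeholders for my environment):

# Rough end-to-end jumbo frame check, run from the Windows-based Veeam proxy.
# Sends a do-not-fragment ping with an 8972-byte payload (9000 bytes minus
# 20 bytes IP header and 8 bytes ICMP header). If any hop in the iSCSI path
# is still at MTU 1500, the ping reports that fragmentation is required.
# The addresses below are placeholders for the MD3200i iSCSI ports.

import subprocess

ISCSI_TARGETS = ["192.168.130.101", "192.168.130.102"]  # placeholder controller IPs
JUMBO_PAYLOAD = 8972  # 9000-byte MTU minus IP (20) and ICMP (8) headers

def check_jumbo(target: str) -> bool:
    """Return True if a non-fragmented jumbo-sized ping reaches the target."""
    result = subprocess.run(
        ["ping", "-f", "-l", str(JUMBO_PAYLOAD), "-n", "2", target],
        capture_output=True, text=True,
    )
    output = result.stdout
    if "needs to be fragmented" in output:
        print(f"{target}: jumbo frames NOT passing (fragmentation required)")
        return False
    if result.returncode == 0:
        print(f"{target}: jumbo frames OK end to end")
        return True
    print(f"{target}: unreachable or other error:\n{output}")
    return False

if __name__ == "__main__":
    for ip in ISCSI_TARGETS:
        check_jumbo(ip)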
I'm sorry, the screenshot was a mistake; I'm uploading the right screenshot from the same VM in direct SAN mode.
The target storage is a local SATA drive array in the Veeam server.
I also suspect an issue with the disk array or the iSCSI network, but I've been unable to figure it out so far. I tried both the native Windows iSCSI initiator and installing the Dell MD drivers, and the behavior is the same. I also tried the round robin and least queue depth iSCSI session policies on the initiator, and it's still the same.
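To try to take Veeam out of the picture, I'm thinking of comparing raw sequential read speed over the iSCSI path against the local SATA repository with a small script like the sketch below. The file path is a placeholder; I'd point it at a multi-gigabyte file on a volume presented over the same iSCSI fabric, then at one on the local array. As far as I know, on a 1 Gbit network something around 110-120 MB/s is the practical ceiling, so anything far below that would point at the network or the array rather than Veeam.

# Rough sequential-read benchmark to isolate the iSCSI read path from Veeam.
# Reads an existing large file in 4 MB chunks and reports MB/s.
# Note: buffering=0 only disables Python-level buffering; Windows may still
# cache the file, so use a file larger than RAM or a freshly copied file
# for a fair comparison between the SAN-attached and local volumes.

import time

CHUNK_SIZE = 4 * 1024 * 1024                      # 4 MB reads, similar to large sequential I/O
TEST_FILE = r"E:\testdata\large_test_file.bin"    # placeholder path

def sequential_read_speed(path: str, max_bytes: int = 4 * 1024**3) -> float:
    """Read up to max_bytes from path sequentially and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while total < max_bytes:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    mbps = sequential_read_speed(TEST_FILE)
    print(f"Read {TEST_FILE} at {mbps:.1f} MB/s")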