Discussions specific to the VMware vSphere hypervisor
aman4God
Influencer
Posts: 22
Liked: 2 times
Joined: Feb 17, 2015 4:34 pm
Full Name: Stanton Cole
Contact:

NBD faster than Direct SAN transport mode

Post by aman4God » Sep 06, 2019 9:35 pm

There are several posts on the forum about SAN transport not being faster than NBD mode, but they are all old and I haven't seen anything recent. We have a new Veeam implementation, and part of the architecture design was to use physical servers for repositories and proxies. We zoned and mapped all the datastores to the physical proxies to enable SAN transport mode across the FC fabric. We set this up and tested it on both our Pure Storage array and our Nimble HF-60 hybrid array.

In every test, NBD mode destroyed SAN transport mode in performance: with Direct Storage access we got between 250 and 450 MB/s processing speed, while with Automatic (NBD) mode we got between 1 and 2 GB/s. That isn't bad, but based on the Veeam architecture recommendations we expected the physical proxies to be much faster than NBD mode. We are using 8Gb FC everywhere and UCS infrastructure with (4) 10Gb FEXs, all of which was detailed in the architecture, so I am trying to figure out whether there is a misconfiguration in the physical servers, the FC, or the SAN. Is this expected or not? I just wanted to know if anyone else is doing this and what their experiences have been.
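As a rough sanity check on those figures (my own back-of-the-envelope numbers, not from the thread): a single 8Gb FC path carries on the order of 800 MB/s of payload, while a single 10GbE link is roughly 1.25 GB/s, so 1-2 GB/s over NBD implies traffic spread across multiple 10Gb uplinks, while 250-450 MB/s over SAN is not even saturating one FC path. A minimal sketch of that arithmetic, assuming a crude ~10 bits on the wire per payload byte:

```python
# Rough line-rate sanity check for the throughput figures above.
# Assumption: ~10 wire bits per payload byte (8b/10b coding on 8Gb FC,
# plus framing overhead) -- a back-of-the-envelope figure, not a spec value.

def link_ceiling_mb_s(gigabits: float, bits_per_byte: float = 10.0) -> float:
    """Approximate usable payload ceiling of one link, in MB/s."""
    return gigabits * 1000 / bits_per_byte

fc_8g = link_ceiling_mb_s(8)     # single 8Gb FC path
eth_10g = link_ceiling_mb_s(10)  # single 10GbE path

print(f"8Gb FC ceiling : ~{fc_8g:.0f} MB/s")
print(f"10GbE ceiling  : ~{eth_10g:.0f} MB/s")

# Observed numbers from the post:
san_best = 450    # MB/s, Direct SAN, top of the observed range
nbd_peak = 2000   # MB/s, NBD, top of the observed range

# SAN mode is not saturating even one FC link...
print(f"SAN best case uses {san_best / fc_8g:.0%} of one 8Gb FC path")
# ...while 2 GB/s over NBD needs at least two 10GbE uplinks in parallel.
print(f"NBD peak needs >= {nbd_peak / eth_10g:.1f}x 10GbE links")
```

This does not explain *why* SAN mode is slow, but it does suggest the SAN-mode reads are serialized or throttled somewhere below a single path's capacity rather than fabric-limited.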

PetrM
Veeam Software
Posts: 59
Liked: 8 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr
Contact:

Re: NBD faster than Direct SAN transport mode

Post by PetrM » Sep 09, 2019 12:17 pm

Hi Stanton!

Can I ask you to clarify if you see this pattern for backup or restore tasks?

In some cases SAN mode might be slower when you restore virtual disks in lazy zeroed format, because of proxy-to-vCenter round trips through the disk manager APIs (AllocateBlock and ClearLazyZero requests). I'd suggest testing restore speed using the eager zeroed format: the space required for the virtual disk is allocated at creation time, so there is no need to call the disk manager API while writing to the disk. You can specify the eager zeroed type at this step of the restore wizard.
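A toy cost model may make the round-trip effect concrete. This is my own illustration, not Veeam's implementation; the 2 ms round-trip time and 1 MB block size are invented for the example:

```python
# Toy cost model (illustration only, not Veeam's actual implementation):
# with a lazy-zeroed disk, each block written may cost an extra
# AllocateBlock/ClearLazyZero round trip to vCenter before the write.

def restore_time_s(disk_gb: int, write_mb_s: float,
                   block_mb: int = 1, rtt_ms: float = 0.0) -> float:
    """Seconds to restore a disk; rtt_ms is the assumed per-block API round trip."""
    blocks = disk_gb * 1024 // block_mb
    write_s = disk_gb * 1024 / write_mb_s
    api_s = blocks * rtt_ms / 1000
    return write_s + api_s

# 100 GB disk written at 400 MB/s:
eager = restore_time_s(100, 400)             # eager zeroed: no API round trips
lazy = restore_time_s(100, 400, rtt_ms=2.0)  # lazy zeroed: assumed 2 ms per block

print(f"eager zeroed: {eager:.0f} s")
print(f"lazy zeroed : {lazy:.0f} s")
```

Even a small per-block latency adds up over a hundred thousand blocks, which is why the eager zeroed restore test isolates the disk manager API as a variable.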

If you're talking about backup performance, then this is not normal. I'd suggest checking the HBA driver version and making sure that:
1) You compare processing rates of full runs in SAN and NBD modes
2) The bottleneck is shown as "Source" in the job window statistics for both modes

Thanks!

aman4God
Influencer
Posts: 22
Liked: 2 times
Joined: Feb 17, 2015 4:34 pm
Full Name: Stanton Cole
Contact:

Re: NBD faster than Direct SAN transport mode

Post by aman4God » Sep 10, 2019 2:48 pm

I haven't done restore tests yet, but that is a good observation; we will definitely do that. Everything is thin-provisioned in our VM environment, as that is the best practice recommended by the storage vendors (Nimble & Pure). The thin volumes could be the cause of the performance degradation. I know there is a warning about Direct Storage transport with thin volumes, but my understanding is that it would still work; the disks would just need to be restored as thick, or could be instant-restored and then Storage vMotioned back to thin from the backup array.

The source in all test situations is UCS, so would I need to check the driver versions of the vHBAs? The physical HBAs are using the latest driver recommended by Nimble Storage for the latest NWT software. The bottleneck in all test instances indicated the source.

PetrM
Veeam Software
Posts: 59
Liked: 8 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr
Contact:

Re: NBD faster than Direct SAN transport mode

Post by PetrM » Sep 11, 2019 3:46 pm

Hi Stanton!

Thanks for the reply!

I would exclude the Nimble and Pure storage arrays from the list of potential root causes, since you're getting low processing rates in SAN mode on both of them.
Any chance you could share a connectivity diagram showing all the components involved in the read path: proxy, UCS infrastructure, storage arrays, switches, etc.?

Thanks!

tsightler
VP, Product Management
Posts: 5399
Liked: 2228 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: NBD faster than Direct SAN transport mode

Post by tsightler » Sep 12, 2019 2:18 am 1 person likes this post

Can you share more details about the tests you are running? In general, both NBD and SAN mode are capable of very high total throughput; however, SAN mode is generally faster "per-disk" than NBD mode. Also, are you comparing Backup from Storage Snapshots or just Direct SAN transport? How many VMs and how many disks are in your tests? Are they always full backups? What's the total number of VMs, disks, and size? What are the bottleneck stats for each run?

I've been involved with another customer with a similar architecture (Pure source storage + FlashBlade target), and they are seeing >4GB/s with Direct SAN when backing up a couple dozen disks in parallel. They could probably get the same performance with NBD, but would likely need many more parallel tasks. Where Direct SAN typically shines is when you have a single large disk, because its "per-disk" rate is so much higher. That being said, your results seem atypical to me.
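The per-disk versus aggregate distinction above can be sketched numerically. All the rates and caps below are illustrative assumptions of mine, not measurements from either environment:

```python
# Illustrative model (assumed numbers, not measurements): aggregate
# throughput = per-disk rate x parallel tasks, capped by the transport's
# total bandwidth ceiling.

def aggregate_mb_s(per_disk_mb_s: float, tasks: int, cap_mb_s: float) -> float:
    """Combined throughput of `tasks` parallel disk reads, in MB/s."""
    return min(per_disk_mb_s * tasks, cap_mb_s)

# Assumed profiles: SAN faster per disk but with a smaller fabric ceiling;
# NBD slower per disk but able to aggregate across four 10Gb uplinks.
san_per_disk, san_cap = 400.0, 1600.0   # e.g. two 8Gb FC paths
nbd_per_disk, nbd_cap = 150.0, 4000.0   # e.g. four 10GbE uplinks

for tasks in (1, 4, 24):
    san = aggregate_mb_s(san_per_disk, tasks, san_cap)
    nbd = aggregate_mb_s(nbd_per_disk, tasks, nbd_cap)
    print(f"{tasks:2d} tasks: SAN {san:6.0f} MB/s, NBD {nbd:6.0f} MB/s")
```

Under these assumptions SAN wins at low task counts (the single-large-disk case) while NBD can pull ahead once enough parallel tasks spread load across the uplinks, which is why comparing the two modes at a single parallelism level can be misleading.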
