-
- Enthusiast
- Posts: 25
- Liked: 4 times
- Joined: Feb 17, 2015 4:34 pm
- Full Name: Stanton Cole
- Contact:
NBD faster than Direct SAN transport mode
There are several posts on the forum about SAN transport not being faster than NBD mode, but they are all old and I haven't seen anything recent about this. We have a new Veeam implementation, and part of the architecture design was to use physical servers for repositories and proxies. We zoned and mapped all the datastores to the physical proxies to enable SAN transport mode across the SAN fabric, then tested this on both our Pure Storage array and our Nimble HF60 hybrid array. In every test, NBD mode beat SAN transport by a wide margin: with Direct Storage access we got between 250 and 450 MB/s processing speed, while with Automatic (NBD) mode we got between 1 and 2 GB/s. That isn't bad, but based on the Veeam architecture recommendations we expected the physical proxies to be much faster than NBD mode. We are using 8Gb FC everywhere and UCS infrastructure with (4) 10Gb FEXs, all of which was detailed in the architecture, so I am trying to figure out whether there is a misconfiguration in the physical servers, the FC, or the SAN. Is this expected or not? I just wanted to know if anyone else is doing this and what their experience has been.
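One way to take Veeam out of the picture is to measure the raw sequential-read rate from a physical proxy straight off a mapped LUN. This is only a rough sketch (the device path is a placeholder, and the proxy needs read access to the raw device), but if it also tops out around 250-450 MB/s, the limit is likely in the FC path, zoning, or multipathing on the proxy rather than in the transport mode:

```python
# Minimal sequential-read test against one LUN as seen by a physical proxy.
# The device path is a placeholder: something like "/dev/sdb" on a Linux
# proxy or r"\\.\PhysicalDrive2" on a Windows proxy (run elevated).
import time

DEVICE = "/dev/sdb"            # placeholder for the mapped datastore LUN
BLOCK_SIZE = 8 * 1024 * 1024   # 8 MiB sequential reads
TOTAL_BYTES = 10 * 1024**3     # stop after 10 GiB

read_bytes = 0
start = time.time()
with open(DEVICE, "rb", buffering=0) as dev:
    while read_bytes < TOTAL_BYTES:
        chunk = dev.read(BLOCK_SIZE)
        if not chunk:
            break
        read_bytes += len(chunk)
elapsed = time.time() - start
print(f"read {read_bytes / 1024**2:.0f} MiB in {elapsed:.1f} s "
      f"({read_bytes / 1024**2 / elapsed:.0f} MiB/s)")
```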
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: NBD faster than Direct SAN transport mode
Hi Stanton!
Can I ask you to clarify if you see this pattern for backup or restore tasks?
In some cases SAN mode might be slower when you restore virtual disks in lazy zeroed format, because of proxy-vCenter round trips through the disk manager APIs (AllocateBlock and ClearLazyZero requests). I'd suggest testing restore speed with the eager zeroed format: the space required for the virtual disk is allocated at creation time, so there is no need to use the disk manager API during writes to the disk. You can specify the eager zeroed type at this step of the restore wizard.
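To see how the disks of a restored (or source) VM are actually provisioned, something like the sketch below works with pyVmomi; the vCenter address, credentials, and VM name are placeholders, and it assumes the usual flat VMDK backing:

```python
# Sketch: report thin / lazy zeroed / eager zeroed for one VM's disks via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab shortcut; use real certs normally
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "test-vm-01")   # placeholder name
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            backing = dev.backing            # FlatVer2BackingInfo for flat VMDKs
            if getattr(backing, "thinProvisioned", False):
                kind = "thin"
            elif getattr(backing, "eagerlyScrub", False):
                kind = "thick eager zeroed"
            else:
                kind = "thick lazy zeroed"
            print(f"{dev.deviceInfo.label}: {kind}, "
                  f"{dev.capacityInKB // 1024**2} GB")
    view.Destroy()
finally:
    Disconnect(si)
```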
If you're talking about backup performance, then this is not normal. I'd suggest checking the HBA driver version and making sure that:
1) You compare processing rates of full runs in SAN and NBD modes
2) The bottleneck is shown as "Source" in job window statistics for both modes
Thanks!
-
- Enthusiast
- Posts: 25
- Liked: 4 times
- Joined: Feb 17, 2015 4:34 pm
- Full Name: Stanton Cole
- Contact:
Re: NBD faster than Direct SAN transport mode
I haven't done restore tests yet, but that is a good observation; we will definitely try that. Everything is thin in our VM environment, as that is the best practice recommended by the storage providers (Nimble & Pure). The thin volumes could be the cause of the performance degradation. I know there is a warning about Direct Storage transport with thin volumes, but my understanding is that it would still work; the disks would just need to be restored as thick, or could be instant restored and then Storage vMotioned off the backup array back into thin.
The source in all test situations is UCS, so would I need to check the driver versions of the vHBAs? The physical HBAs are using the latest driver recommended by Nimble Storage for the latest NWT software. The bottleneck in every test instance indicated the source.
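If any of the proxies are Linux, the negotiated FC link speed and port state can be checked straight from sysfs (a small sketch below); on Windows proxies the vendor HBA utilities report the same details.

```python
# Sketch: list negotiated link speed, port state, and WWPN for each FC HBA
# port on a Linux proxy, straight from the fc_host class in sysfs.
import glob
import os

def read_attr(host_path, name):
    try:
        with open(os.path.join(host_path, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for host in sorted(glob.glob("/sys/class/fc_host/host*")):
    print(f"{os.path.basename(host)}: "
          f"speed={read_attr(host, 'speed')}, "
          f"state={read_attr(host, 'port_state')}, "
          f"wwpn={read_attr(host, 'port_name')}")
```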
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: NBD faster than Direct SAN transport mode
Hi Stanton!
Thanks for the reply!
I would exclude the Nimble and Pure storage arrays from the list of potential root causes, since you're getting low processing rates in SAN mode on both of them.
Any chance you could share a connectivity diagram showing all components involved in the read process: proxy, UCS infrastructure, storage arrays, switches, etc.?
Thanks!
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: NBD faster than Direct SAN transport mode
Can you share more details about the test you are running? In general, both NBD and SAN mode are capable of very high total throughput; however, SAN mode is generally faster "per-disk" than NBD mode. Also, are you comparing Backup from Storage Snapshots or just Direct SAN transport? How many VMs and how many disks are in your tests? Are they always full backups? What's the total number of VMs and disks, and the total size? What are the bottleneck stats for each run?
I've been involved with another customer with a similar architecture (Pure source storage + FlashBlade target), and they are seeing >4GB/s with Direct SAN when backing up a couple dozen disks in parallel. They could probably get the same performance with NBD, but would likely need many more parallel tasks. Where Direct SAN typically shines is a single large disk, because its per-disk rate is so much higher. That being said, your results seem atypical to me.
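The per-disk versus aggregate distinction is easy to demonstrate outside of Veeam: read one LUN sequentially, then read several in parallel, and compare the per-stream rate with the total. A toy sketch (the paths are placeholders for mapped LUNs or large test files):

```python
# Toy comparison of per-stream vs. aggregate sequential-read throughput.
import time
from concurrent.futures import ThreadPoolExecutor

PATHS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # placeholders
BLOCK = 8 * 1024 * 1024                        # 8 MiB reads
LIMIT = 5 * 1024**3                            # 5 GiB per stream

def read_stream(path):
    done, start = 0, time.time()
    with open(path, "rb", buffering=0) as f:
        while done < LIMIT:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            done += len(chunk)
    return done, time.time() - start

wall_start = time.time()
with ThreadPoolExecutor(max_workers=len(PATHS)) as pool:
    results = list(pool.map(read_stream, PATHS))
wall = time.time() - wall_start

for path, (done, secs) in zip(PATHS, results):
    print(f"{path}: {done / 1024**2 / secs:.0f} MiB/s per stream")
print(f"aggregate: {sum(d for d, _ in results) / 1024**2 / wall:.0f} MiB/s")
```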
-
- Enthusiast
- Posts: 25
- Liked: 4 times
- Joined: Feb 17, 2015 4:34 pm
- Full Name: Stanton Cole
- Contact:
Re: NBD faster than Direct SAN transport mode
We had a subset of 4 VMs that were filled for test purposes with a couple hundred GB of small files and the same of larger files. For the purposes of the test we configured a 10TB volume on both our Pure Storage M70 array and our Nimble HF60 array and performed the tests with the same servers using just Direct SAN transport mode. Every test was a full backup each time. There were only 4 VMs, each with 2 disks, none larger than 250GB. The bottleneck for each job was the source, and when running the job the maximum processing throughput was 400-600 MB/s in Direct SAN transport mode. It didn't change much whether it was the Nimble or the Pure being tested. With NBD transport we achieved between 1 and 2 GB/s for the same servers. This test was a while back and we have since been using only NBD mode, as it is sufficient for the backups we currently have in the system, but if we can figure out how to better take advantage of SAN transport mode I would definitely like to do that.
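For a rough sense of scale, assuming roughly 2 TB of allocated data across those 8 disks, the two observed rates translate into very different full-backup windows (a quick back-of-the-envelope calculation using the rounded figures above):

```python
# Back-of-the-envelope full-backup window for ~2 TB of data at the two
# (rounded) processing rates observed in this thread.
data_gb = 8 * 250   # 8 disks at up to 250 GB each
for label, rate_mb_s in [("Direct SAN at ~450 MB/s", 450), ("NBD at ~1.5 GB/s", 1500)]:
    minutes = data_gb * 1024 / rate_mb_s / 60
    print(f"{label}: ~{minutes:.0f} minutes")
```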
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: NBD faster than Direct SAN transport mode
Hi Stanton,
I'd recommend contacting our support team so they can perform a detailed investigation of the connectivity schema, including all components involved in the read process: proxy, UCS infrastructure, storage arrays, and switches. They can also measure the read processing rate in both transport modes by running a sample tool that uses the same API functions as the Veeam Data Mover uses during reads. Such a test might help to isolate the issue.
Thanks!