Greetings.
Does NBD behave differently during restores compared with the HotAdd proxy experience described below?
We use HotAdd proxy mode exclusively; the main backup repository is StoreOnce Catalyst, and the storage array is a Dell SC9000. Backup performance is fine.
If we restore to a virtual disk of type Thin, restore throughput is very slow regardless of repository type. The proxy performing the restore to the thin disk starts experiencing latencies of 40-150 ms on its virtual disks. This is consistent with the write performance of thin disks in our environment when new blocks need allocating, and suggests a HotAdd proxy can't do anything special when writing to a thin disk.
If I restore to a virtual disk of type EagerZeroedThick, I have to wait for the host and storage array to pre-create the entire virtual disk before the proxy begins writing data. This is consistent with the time it takes to create an EagerZeroedThick disk in our environment.
The problem arises when there is a large delta between the amount of data written to a disk and its size. For example, a disk with 400 GB in use out of 8 TB allocated will either take a very long time because of the latency issue with thin disks, or the array will take a very long time to pre-create the 8 TB EagerZeroedThick disk before a single byte is restored.
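To make the trade-off concrete, here is a rough back-of-the-envelope model of the 400 GB used / 8 TB allocated case above. All throughput figures are assumptions chosen for illustration only, not measurements from this environment:

```python
# Rough model of restore time for a mostly-empty disk,
# using the 400 GB used / 8 TB allocated example above.
# All rates below are ASSUMPTIONS for illustration, not measurements.

used_gb = 400
allocated_gb = 8 * 1024  # 8 TB

thin_write_mbps = 60    # assumed effective rate while thin blocks are allocated
ezt_zero_mbps = 200     # assumed rate at which the array zeroes an EZT disk
thick_write_mbps = 500  # assumed write rate once blocks are pre-allocated

def hours(gb, mbps):
    """Convert a data amount in GB and a rate in MB/s into hours."""
    return gb * 1024 / mbps / 3600

# Thin: only the used data is written, but at the slow allocating rate.
thin_restore_h = hours(used_gb, thin_write_mbps)
# EZT: the whole disk is zeroed up front, then used data is written fast.
ezt_restore_h = hours(allocated_gb, ezt_zero_mbps) + hours(used_gb, thick_write_mbps)

print(f"thin: {thin_restore_h:.1f} h, EZT: {ezt_restore_h:.1f} h")
```

With these made-up rates the thin restore wins by a wide margin, but the ranking flips as the used/allocated ratio grows, which is exactly the dilemma described above.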
Would restoring over NBD have the same limitations, all else being equal?
Thanks
Eugene V — Expert · Posts: 176 · Liked: 30 times · Joined: Jul 26, 2018 8:04 pm
Petr Makarov — Veeam Software · Posts: 3626 · Liked: 608 times · Joined: Aug 28, 2013 8:23 am · Location: Prague, Czech Republic
Re: HotAdd vs. NBD restore performance
Hi Eugene!
At best, I'd expect the restore processing rate in NBD to be the same as in HotAdd mode.
For thin and lazy-zeroed thick disks, new block allocation and zeroing out are required regardless of the transport mode in use, and the data flow goes over the network link between the proxy and vCenter.
This is usually slower than writing to a disk attached directly to the proxy.
Eager disk creation time doesn't depend on the selected transport mode; it should be the same in all modes.
On the other hand, it makes sense to test NBD mode in your particular environment and compare restore performance with HotAdd.
Sometimes testing in a specific environment gives results different from what theory predicts.
Thanks!
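One way to run the comparison suggested here is simply to restore the same VM through each transport mode and time it. As a complementary check, a crude in-guest write probe can show whether the target disk itself is the bottleneck (e.g. on a freshly created thin vs. eager-zeroed disk). This is a generic sketch, not a Veeam tool; the path and sizes are placeholders you would adapt:

```python
import os
import time

def write_probe(path, total_mb=1024, block_mb=4):
    """Sequentially write total_mb of random data to `path`, return MB/s."""
    block = os.urandom(block_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # ensure data actually reaches the disk
    return total_mb / (time.monotonic() - start)
```

Running it against a file on a thin disk that is still allocating new blocks should show a noticeably lower rate than against a pre-zeroed disk, mirroring the latency symptom described in the original post.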
Andreas Neufert — VP, Product Management · Posts: 7080 · Liked: 1511 times · Joined: May 04, 2011 8:36 am · Location: Germany
Re: HotAdd vs. NBD restore performance
I think there should be no big difference, but NBD could potentially be slower due to the throughput limitations imposed by VMware.
Let me add that there is a restore method that would help you: "Quick Rollback" allows restoring just the changed blocks to an existing disk.
https://helpcenter.veeam.com/docs/backu ... l?ver=95u4
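Conceptually, Quick Rollback compares the backup against the existing disk (via changed block tracking) and writes only the blocks that differ, so the allocation problem largely disappears when the original disk still exists. A toy illustration of the changed-blocks idea follows; this is not Veeam's implementation, and the block size and names are made up:

```python
BLOCK = 4  # toy block size in bytes; real implementations use much larger blocks

def changed_block_restore(backup: bytes, current: bytearray) -> int:
    """Copy only the blocks that differ from the backup; return blocks written."""
    written = 0
    for off in range(0, len(backup), BLOCK):
        chunk = backup[off:off + BLOCK]
        if bytes(current[off:off + len(chunk)]) != chunk:
            current[off:off + len(chunk)] = chunk  # rewrite only this block
            written += 1
    return written
```

For a disk where only a small fraction of blocks changed since the backup, the write volume is proportional to the delta rather than to the used or allocated size.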
Eugene V — Expert
Re: HotAdd vs. NBD restore performance
Thank you for the feedback. I will go back to Dell; I have heard in other communities that other VAAI-accelerated storage arrays can pre-create many TB of EagerZeroedThick data within a matter of minutes, so it may be worth looking into why our SC9000 cannot.