Dear everyone,
Enterprise storage often comes with deduplication and compression functionality.
Besides that, there is NTFS and ReFS deduplication and compression functionality.
On top of that, there is Veeam with its own deduplication and compression functionality.
At which levels should this functionality be enabled, and how much does it help reduce data usage?
What is the impact on restore time? How likely is data corruption when using all of these techniques (at the same time)?
Thank you very much.
Mark
Alexander Fogelson (Veeam Software):
Re: SAN, filesystem and application level deduplication
Hi Mark, everything depends on the particular storage, since recommended settings vary between makes and models. The general recommendation for a dedupe target is to keep Veeam B&R deduplication enabled, since its large block size means it doesn't affect the hardware dedupe ratio much, and to set the Veeam B&R compression level to dedupe-friendly (the name speaks for itself).
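To illustrate why block size matters here, a toy model (not Veeam's or any appliance's actual algorithm; the data and block sizes are made up) that dedupes a buffer at a fixed block size by hashing each block and counting unique ones:

```python
import hashlib

def dedup_ratio(data: bytes, block_size: int) -> float:
    """Logical size divided by unique size after fixed-block dedup."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Toy data set: the same 4 KiB pattern repeated 256 times (1 MiB total),
# i.e. deliberately very redundant.
pattern = bytes(range(256)) * 16          # 4 KiB
data = pattern * 256                      # 1 MiB

print(dedup_ratio(data, 4 * 1024))        # small blocks: every block repeats -> 256.0
print(dedup_ratio(data, 1024 * 1024))     # one huge block: nothing to match -> 1.0
```

The point of the sketch: a coarse (large-block) dedup pass finds few duplicates itself, which is exactly why it leaves most of the redundancy intact for the appliance's finer-grained dedup to find.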
As for the restore time, it is directly affected by the time required to rehydrate/decompress the data; that is why it is typically recommended to have raw (non-deduplicating) primary storage for fast operational restores and to use deduplication devices as secondary storage.
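To put the rehydration penalty in perspective, a back-of-the-envelope sketch; the 10 TB restore size and the 1000 vs. 150 MB/s read rates are assumed numbers for illustration only, not vendor figures:

```python
# Hypothetical effective read rates: a raw repository streaming from disk
# vs. a dedupe appliance that must rehydrate/decompress everything it reads.
def restore_hours(backup_tb: float, read_mb_per_s: float) -> float:
    """Hours to read back backup_tb terabytes at read_mb_per_s MB/s."""
    return backup_tb * 1024 * 1024 / read_mb_per_s / 3600

print(f"raw repository:   {restore_hours(10, 1000):.1f} h")   # ~2.9 h
print(f"dedupe appliance: {restore_hours(10, 150):.1f} h")    # ~19.4 h
```

Even with generous assumptions, a large restore from a deduplicating target can take several times longer than from raw disk, which is the whole argument for the primary/secondary split above.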
blake dufour (Expert):
Re: SAN, filesystem and application level deduplication
ExaGrid has a landing zone, where the most recent restore points are stored straight to disk without dedupe. Outside of that, provision more space on your SAN so you can keep replicas of VMs onsite to protect at the server level. Also, replicate off site so you can protect your environment at the data-center level. There really is no replacement for having replicas ready to go. We use our dedupe appliance for long-term retention, but in most cases restoring a backup from dedupe will be painful, especially if we are talking terabytes; management should be aware of this as well.