
Storage Inline Dedupe vs. Veeam Dedupe

Post by bonzovt »

A question that has come up a lot recently on my end involves the settings to use when a backup repository resides on a SAN capable of inline dedupe and compression. Should this type of storage essentially be treated the same as a dedupe appliance: turn off all dedupe on the Veeam side and configure the repository to decompress data before it is written to disk? It seems like we're increasingly using backup repositories that aren't necessarily Data Domain, ExaGrid, etc. but are actually SAN arrays with similar inline dedupe capabilities, so I'm curious what the best option is in that scenario.

Also curious: in the case of HCI, where there may be one overall storage container with global inline dedupe, if the production VM and the backup repository happened to live in the same container (this is slightly theoretical :mrgreen: ), how big would the backup of a specific VM be? I realize this is mainly down to the storage and how it does dedupe, but has anyone seen this in action? Theoretically the data for that VM is already written to disk for the production VM, so wouldn't the backup file be significantly smaller because of dedupe? Or does the process of Veeam backing up the VM and repackaging it from VMDK format into VBK format change the blocks enough that the storage can still dedupe the data a bit, but you'd really end up with two versions of "similar" data on disk, one for the prod VM and one for the backup file?
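
To make that second question concrete, here's a toy Python sketch of my own (nothing to do with Veeam's actual internals). It assumes the array dedupes on fixed 4 KB boundaries and uses SHA-256 as a stand-in for the array's block fingerprinting; the 3-byte header is made up for illustration. Even a tiny header in front of an otherwise identical payload shifts every fixed-size block, so the array ends up holding two copies of "similar" data it can't dedupe. An array with variable or content-defined chunking would fare better.

Code: Select all
import hashlib
import os

BLOCK = 4096  # assumed fixed dedupe block size; real arrays vary

def block_hashes(data):
    # Split into fixed-size blocks and fingerprint each one
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

# Stand-in for the production VM's on-disk data: 256 random 4 KB blocks
vm_data = os.urandom(BLOCK * 256)

# Stand-in for a backup file carrying the same payload behind a small,
# hypothetical format header
backup_file = b"HDR" + vm_data

shared = set(block_hashes(vm_data)) & set(block_hashes(backup_file))
print(f"blocks shared between VM and backup: {len(shared)} of 256")
# Prints 0: the 3-byte shift misaligns every fixed block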

Re: Storage Inline Dedupe vs. Veeam Dedupe

Post by DaveWatkins »

Assuming your SAN dedupe is any good (and a lot of them really aren't), then yes, you'd let the SAN do it rather than Veeam.

Putting your actual backups on the same storage as your VMs would, in theory, give you a huge dedupe rate, but you're only getting that because you're not actually storing a full copy of your VM. If a single block gets corrupted it can affect both your VM and your backup, because both refer to that block. And if that storage fails, your backups and your live data are gone together.
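
Here's a quick Python toy to show the shared-block risk (my own sketch; fixed 4 KB blocks and SHA-256 stand in for whatever the array actually uses):

Code: Select all
import hashlib

store = {}  # the array's shared block store: fingerprint -> block bytes

def write_deduped(data, block=4096):
    # Store a "file" as a list of block references, deduping into store
    refs = []
    for i in range(0, len(data), block):
        chunk = data[i:i + block]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # identical blocks are stored once
        refs.append(h)
    return refs

def verify(refs):
    return all(hashlib.sha256(store[h]).hexdigest() == h for h in refs)

payload = b"\x42" * 8192
vm = write_deduped(payload)      # the "production VM"
backup = write_deduped(payload)  # its "backup" on the same array

print(len(store))                # 1: both point at a single stored block

store[vm[0]] = b"\x00" * 4096    # one corrupted block on the array

print(verify(vm), verify(backup))  # False False: both copies are now bad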

Re: Storage Inline Dedupe vs. Veeam Dedupe

Post by chjones »

If your storage can perform inline deduplication, you should always let the storage handle it.

Remember, Veeam dedupe only works inside each backup file in a backup chain. If you have the scenario VBK > VIB > VIB > VBK, dedupe from a Veeam perspective only occurs between blocks inside each of those four files: you get dedupe between blocks inside the first VBK, then again only between blocks inside the first VIB, and so on. Veeam never dedupes between backup files.

Deduplication at the storage layer works across ALL of the blocks in ALL of the Veeam backup files, so your dedupe rate will be higher (assuming there are duplicate blocks between files, which there should be if you have multiple full backup VBK files).
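
To put rough numbers on the difference, here's a small Python sketch (synthetic data, fixed 4 KB blocks, SHA-256 as the fingerprint; it illustrates the principle, not Veeam's actual block handling). Two full backups share 90 of their 100 blocks:

Code: Select all
import hashlib
import os

BLOCK = 4096

def blocks(data):
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

# Two synthetic full backups of the same VM: 90 blocks in common,
# plus 10 blocks of changed data each
common = os.urandom(BLOCK * 90)
vbk1 = common + os.urandom(BLOCK * 10)
vbk2 = common + os.urandom(BLOCK * 10)

# Veeam-style: each file deduped on its own
per_file = len(set(blocks(vbk1))) + len(set(blocks(vbk2)))

# Storage-style: one block pool across every file on the array
global_pool = len(set(blocks(vbk1)) | set(blocks(vbk2)))

print(per_file)     # 200: no duplicates exist inside either file
print(global_pool)  # 110: the 90 common blocks are stored once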

You should also ensure the data is decompressed before it is written to disk, as storage-level dedupe cannot work efficiently on compressed blocks (just as you would if Veeam were writing to a dedupe appliance).
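
The compression point is easy to demonstrate with the same toy approach. Two files share 90% of their payload at the same offsets; uncompressed, block-level dedupe sees the overlap, but after stream compression each file's output depends on everything that came before it, so the shared payload no longer produces matching blocks:

Code: Select all
import hashlib
import os
import zlib

BLOCK = 4096

def blocks(data):
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}

common = os.urandom(BLOCK * 90)
f1 = os.urandom(BLOCK * 10) + common
f2 = os.urandom(BLOCK * 10) + common

# Uncompressed: the array's dedupe sees the 90 shared blocks
print(len(blocks(f1) & blocks(f2)))  # 90

# Stream-compressed: the shared payload no longer lines up,
# so cross-file dedupe finds nothing
print(len(blocks(zlib.compress(f1)) & blocks(zlib.compress(f2))))  # 0 in practice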

I'd also warn against storing your Veeam backup files on the same array as your primary storage; you should have at least some level of hardware separation between your production and backup data. I understand that isn't always possible, and your production storage is typically the highest-performing storage in your environment, so it's tempting to use it for fast restores. In that case, I'd recommend using Backup Copy jobs to ensure you have a copy of the required restore points on different media and maximise your data protection.