veeeammeupscotty
Enthusiast
Posts: 33
Liked: 2 times
Joined: May 05, 2017 3:06 pm
Full Name: JP

CBT vs Proprietary Filtering for Windows Deduplication

Post by veeeammeupscotty »

I understand that Windows Server deduplication wreaks havoc on CBT because the various optimization jobs can cause a large number of changed blocks even for an incremental backup, but I'm wondering whether Veeam's "proprietary filtering" would allow for smaller incremental backups in this scenario. Even if it would, I realize there are probably other drawbacks to not using CBT, such as more disk I/O, higher CPU load, longer backup times, etc., but even with fast clone we keep running out of space due to massive CBT incrementals.
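
To illustrate what I mean, here's a toy sketch (nothing like how the dedup chunk store is really laid out, just to show the block-level view): an optimization pass physically rewrites data even though no file has logically changed, so anything that tracks changes per block flags almost the whole volume.

[code]
import hashlib
import os
import random

BLOCK_SIZE = 4096      # toy block size; real CBT granularity is different
VOLUME_BLOCKS = 1000   # toy volume of 1000 blocks

def block_hashes(volume):
    # Digest every block so we can tell which ones were physically rewritten.
    return [hashlib.sha256(b).digest() for b in volume]

# Toy volume: lots of files whose 4 KB blocks are duplicates of 50 unique chunks.
unique_chunks = [os.urandom(BLOCK_SIZE) for _ in range(50)]
volume = [random.choice(unique_chunks) for _ in range(VOLUME_BLOCKS)]
before = block_hashes(volume)

# Toy "optimization job": copy the unique chunks into a chunk-store region
# at the end of the volume and replace the original blocks with small stub
# records pointing at the chunk store. No file content changed logically,
# but nearly every block got rewritten physically.
chunk_store_start = VOLUME_BLOCKS - len(unique_chunks)
for i, chunk in enumerate(unique_chunks):
    volume[chunk_store_start + i] = chunk
for i in range(chunk_store_start):
    ref = unique_chunks.index(volume[i])
    volume[i] = f"stub->chunk{ref}".encode().ljust(BLOCK_SIZE, b"\0")

after = block_hashes(volume)
changed = sum(1 for a, b in zip(before, after) if a != b)
print(f"{changed} of {VOLUME_BLOCKS} blocks look changed to block-level tracking")
[/code]
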
csydas
Expert
Posts: 193
Liked: 47 times
Joined: Jan 16, 2018 5:14 pm
Full Name: Harvey Carel

Re: CBT vs Proprietary Filtering for Windows Deduplication

Post by csydas »

Hi JP,

Not really. The issue with dedupe and big increments is the C and the B in CBT, as you pointed out. Even the proprietary method is still just looking at the blocks themselves, not the actual content within each data block.

That's the big trade-off with dedupe on a VM - you can save space on the datastore, or you can save space on the backup storage. Avoiding CBT just means longer reads, since you have to read the whole disk without any benefit in return - Veeam is still going to see the changed blocks and move them to the backup.
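
To put that in rough pseudo-code terms (just an illustration, not Veeam's actual filtering algorithm): without CBT, every block gets read and fingerprinted, and only the blocks whose fingerprint differs from the previous run get shipped - which is the same changed set CBT would have handed over for free.

[code]
import hashlib

BLOCK_SIZE = 4096  # illustrative granularity, not what any real product uses

def fingerprints(disk_blocks):
    # Digest every block so the next run has something to compare against.
    return [hashlib.sha256(b).digest() for b in disk_blocks]

def incremental_without_cbt(current_disk, previous_fingerprints):
    # Full read of every block, but only blocks whose digest differs from
    # last time go into the increment. The increment size ends up the same
    # as with CBT - the extra cost is having to read the whole disk.
    changed = []
    for i, block in enumerate(current_disk):
        if hashlib.sha256(block).digest() != previous_fingerprints[i]:
            changed.append((i, block))
    return changed
[/code]
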

Unfortunately it's just something you need to plan for; treat dedupe VMs as highly transactional VMs and plan the backup space around them. For our fileserver, we ended up just doing periodic fulls because it math-ed out to less space overall.
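
If it helps, this is roughly how we "math-ed it out" - a back-of-the-envelope chain-size estimate you can feed your own numbers into. The figures below are made up, and it deliberately ignores compression, fast clone block sharing, and the temporary overlap while an old chain waits to be deleted, all of which shifted the balance in our case:

[code]
def chain_space_tb(full_tb, increment_ratio, restore_points, fulls_in_chain=1):
    # Crude upper bound for the on-disk space of one retention window:
    # the fulls plus the incrementals between them. Plug in your own
    # full size, average increment size, and retention.
    increments = restore_points - fulls_in_chain
    return fulls_in_chain * full_tb + increments * full_tb * increment_ratio

# Hypothetical 2 TB dedup volume where optimization/GC churn makes each
# incremental roughly 40% of a full, with 14 restore points kept.
print(chain_space_tb(2.0, 0.40, 14, fulls_in_chain=1))  # forever incremental
print(chain_space_tb(2.0, 0.40, 14, fulls_in_chain=2))  # e.g. weekly fulls in the window
[/code]
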