
BCJ compression level and repository dedupe settings

Post by graycj »

Hello Veeam!

We're using a BCJ to back up to an offsite Windows Server 2012 R2 repository, which is slowly running out of space, so we're considering enabling deduplication on it.
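For reference, this is roughly how I'd expect to enable it on the repository volume. A minimal sketch only: "D:" is a placeholder for our actual volume, and the MinimumFileAgeDays value would need tuning so files Veeam is still writing to or merging don't get optimized.

# Sketch only -- "D:" stands in for the repository volume.
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the volume (general-purpose file server profile).
Enable-DedupVolume -Volume "D:" -UsageType Default

# Only optimize files untouched for a few days, so backup files that
# are still being written to or merged are left alone.
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 3

# Kick off an optimization pass, then check progress and savings.
Start-DedupJob -Volume "D:" -Type Optimization
Get-DedupStatus -Volume "D:"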

I'm wondering what the implications will be for the current BCJ jobs, and for the backup files already in the repository, if I make the following changes (sketched in script form below):

1. Change the compression level for the BCJ job from Auto to Dedupe-friendly.

2. Enable the repository's Storage Compatibility settings: align backup file data blocks, and decompress backup data blocks before storing.
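For what it's worth, this is what I have in mind, scripted via the Veeam PowerShell snap-in. Again just a sketch: the job and repository names are placeholders, the compression level mapping (4 = dedupe-friendly) is what I believe our version uses, and the parameter names on Set-VBRBackupRepository are from memory, so all of it would need verifying against the documentation for your version.

# Sketch -- names are placeholders; verify cmdlet and parameter
# support in your Veeam PowerShell version before relying on this.
Add-PSSnapin VeeamPSSnapin

# 1. Set the copy job's compression level (4 is believed to map to
#    dedupe-friendly; confirm against your version's documentation).
Get-VBRJob -Name "Offsite BCJ" |
    Set-VBRJobAdvancedStorageOptions -CompressionLevel 4

# 2. Turn on the repository storage compatibility options (parameter
#    names assumed; check Get-Help Set-VBRBackupRepository first).
Get-VBRBackupRepository -Name "Offsite Repo" |
    Set-VBRBackupRepository -AlignDataBlocks -DecompressDataBlocks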

I assume that the BCJ will carry on as normal and all existing backup files will remain as they are, but future files will be more dedupe-friendly, giving better storage efficiency?

Thanks for your help,
Carl

Re: BCJ compression level and repository dedupe settings

Post by foggy »

Carl, there's no sense in changing the compression level in the job if you enable decompression on the repository. However, just FYI: if you change the compression level in the backup copy job (and keep the repository settings intact), new incrementals will contain data blocks compressed according to the new setting, as you've correctly assumed. The full backup will contain a mix of blocks with different compression levels once the merge process occurs.

Re: BCJ compression level and repository dedupe settings

Post by PTide »

Hi,

May I ask what the source of your BCJ is? Please keep in mind that the "Auto" compression level means the BCJ inherits the compression level of the parent backup job. Under some circumstances (for example, when the source contains backups with different compression ratios), the "Auto" option can produce more traffic between sites than a fixed setting would. A better choice than "Dedupe-friendly" may be "Optimal": it can significantly reduce the amount of data transferred over the network while adding relatively little CPU overhead compared to "Dedupe-friendly". Of course, everything depends on your environment, so please refer to this explanation.
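To make the trade-off concrete, here is a purely illustrative back-of-the-envelope calculation. The ratios and data volume below are assumptions for the sake of the example, not measured figures; actual compression depends entirely on your data.

# Illustrative only: assumed average compression ratios, not measurements.
$changedDataGB = 200   # hypothetical amount of changed data per sync
$ratios = [ordered]@{ 'Dedupe-friendly' = 1.7; 'Optimal' = 2.2 }
foreach ($entry in $ratios.GetEnumerator()) {
    '{0,-16}: ~{1:N0} GB transferred' -f $entry.Key, ($changedDataGB / $entry.Value)
}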

Thank you.
