bg.ranken
Expert
Posts: 127
Liked: 22 times
Joined: Feb 18, 2015 8:13 pm
Full Name: Randall Kender
Contact:

Disabling "Decompress backup data blocks before storing"

Post by bg.ranken »

Hello,

We have a few backup copy jobs storing data in repositories that have the option "Decompress backup data blocks before storing" enabled. Due to space limitations we would like to disable this setting and let the data be stored compressed again.

If we disable this setting on a repository with existing data from backup copy jobs, is there any way to have it create a new full backup with the new compression settings, other than doing a reseed? If we adjust the settings so that the next backup copy cycle creates a GFS restore point, will the new full backup file be compressed, or will it still be decompressed?

I'm just trying to avoid having to reseed these backup files if possible.

Thanks!
veremin
Product Manager
Posts: 20413
Liked: 2301 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Disabling "Decompress backup data blocks before storing"

Post by veremin »

You can execute an active full backup for a backup copy job by simply right-clicking it and selecting the "Active full" option.

However, in your case an active full backup is not needed: as soon as you disable the corresponding option ("Decompress backup data blocks before storing"), new restore points will land on the repository in a compressed state.
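
If you prefer to script it, here is a minimal PowerShell sketch of both steps. Treat it as a sketch only: the repository and job names are hypothetical, and cmdlet availability varies by B&R version, so verify with Get-Help first.

Code: Select all

# Sketch only: verify cmdlet names against your Veeam B&R version.
Add-PSSnapin VeeamPSSnapin            # older versions; newer ones ship a module instead

# Inspect the repository whose "decompress" option you are changing
$repo = Get-VBRBackupRepository -Name "OffsiteRepo"   # hypothetical name
$repo | Format-List *

# Once the option is unchecked in the repository settings, just run the
# next backup copy interval; new restore points land compressed, so no
# active full (and no reseed) is required.
$job = Get-VBRJob -Name "Backup Copy Job 1"           # hypothetical name
Start-VBRJob -Job $job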

Thanks.
bg.ranken
Expert
Posts: 127
Liked: 22 times
Joined: Feb 18, 2015 8:13 pm
Full Name: Randall Kender
Contact:

Re: Disabling "Decompress backup data blocks before storing"

Post by bg.ranken »

Yeah, I was trying to avoid an active full backup, since it would take days to copy the new information from our primary site and we have other backups that need to get offsite.

I'm glad to hear that the restore points will be compressed. Does that mean that the new synthetic fulls made after the creation of a GFS restore point are also counted as new restore points, and therefore compressed?
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Disabling "Decompress backup data blocks before storing"

Post by Gostev »

Yes.
haslund
Veeam Software
Posts: 856
Liked: 154 times
Joined: Feb 16, 2012 7:35 am
Full Name: Rasmus Haslund
Location: Denmark
Contact:

Re: Disabling "Decompress backup data blocks before storing"

Post by haslund »

Just keep in mind that if there was a reason you had this enabled, disabling it could have certain impacts.
For example, if you are storing these backup copy jobs on a deduplication appliance and you enable compression, it could very negatively impact the deduplication ratio on the storage.
Rasmus Haslund | Twitter: @haslund | Blog: https://rasmushaslund.com
bg.ranken
Expert
Posts: 127
Liked: 22 times
Joined: Feb 18, 2015 8:13 pm
Full Name: Randall Kender
Contact:

Re: Disabling "Decompress backup data blocks before storing"

Post by bg.ranken »

Thanks for the reminder Rasmus. We are in fact backing up to a deduplicated repository, but it's with Windows deduplication, so we need to keep each file under 4 TB (Windows Server 2016 deduplication won't dedupe a file larger than that). In a perfect world I'd be able to tell the job to split the backup into multiple 1 TB or 2 TB files and I wouldn't have this problem, but until then I can just leave compression on to keep the files under 4 TB.
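
For anyone else running Windows dedupe under a Veeam repository, here is a quick PowerShell sketch to spot files that have outgrown that limit (the repository path is hypothetical):

Code: Select all

# Find backup files above the 4 TB Windows Server 2016 dedup limit;
# the optimizer silently skips anything larger.
$limit = 4TB
Get-ChildItem -Path 'E:\VeeamRepo' -Recurse -Include *.vbk, *.vib |
    Where-Object { $_.Length -gt $limit } |
    Select-Object FullName, @{ n = 'SizeTB'; e = { [math]::Round($_.Length / 1TB, 2) } }

# Overall dedup savings on the volume, for comparison
Get-DedupStatus -Volume 'E:'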

I know I had asked before for something like backup file splitting to be implemented, but I don't think enough people are using Windows-based deduplicated repositories yet to warrant dedicating resources to it. I'm hoping that with the Windows Server 2016 release and all of its deduplication improvements, more people will start using it, and splitting backup files can eventually be added to the Veeam roadmap. Even something like a per-VMDK option (similar to the per-VM option) would help in many of my cases.
Andanet
Veeam Legend
Posts: 41
Liked: 5 times
Joined: Jul 08, 2015 8:26 pm
Full Name: Antonio
Location: Italy
Contact:

Re: Disabling "Decompress backup data blocks before storing"

Post by Andanet »

Hi all,
after reading this thread, I just want to confirm the best configuration for my setup.
We use one HPE StoreOnce with four Catalyst stores and one gateway proxy per store: three stores for backup jobs and one store for the copy job.
Every backup job is configured with:
Inline deduplication: False
Compression level: Dedupe-friendly
Storage optimization: KbBlockSize4096
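
For reference, this is roughly how those settings look when applied from PowerShell; a sketch only, with a hypothetical job name, and the parameter values should be double-checked with Get-Help for your B&R version:

Code: Select all

# Dedupe-appliance-friendly job storage settings (sketch).
# EnableDeduplication $false   -> inline deduplication off
# CompressionLevel 4           -> dedupe-friendly
# StgBlockSize KbBlockSize4096 -> "Local target (large blocks)"
$job = Get-VBRJob -Name "Backup-Store1"
Set-VBRJobAdvancedStorageOptions -Job $job -EnableDeduplication $false -CompressionLevel 4 -StgBlockSize KbBlockSize4096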

Am I correct that I should uncheck the "Decompress backup data blocks before storing" setting?
And should I do that only for the copy job, or for both kinds of jobs?
Thanks all.
Antonio
Antonio aka Andanet D'Andrea
Backup System Engineer Senior at Sorint.lab ¦ VMCE2021-VMCA2022 | VEEAM Legends 2023 | VEEAM VUG Italian Leader ¦
Mike Resseler
Product Manager
Posts: 8191
Liked: 1322 times
Joined: Feb 08, 2013 3:08 pm
Full Name: Mike Resseler
Location: Belgium
Contact:

Re: Disabling "Decompress backup data blocks before storing"

Post by Mike Resseler »

Randall,

Are you using per-VM backup chains? (Explained very well by Luca here: http://www.virtualtothecore.com/en/veea ... up-chains/)

This is a very interesting option to use in combination with Windows dedupe.
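
If you try it, note that per-VM backup files is a repository-level setting and generally only takes effect once a new full is created. A quick way to see whether a repository already has it on (the repository name is hypothetical, and the exact property name varies between versions, so just scan the output):

Code: Select all

# Dump the repository settings and look for the per-VM backup files flag.
$repo = Get-VBRBackupRepository -Name "WinDedupRepo"
$repo | Format-List *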

Antonio,

I would advise you to leave that checkbox enabled. Compression reduces the efficiency of deduplication, so decompressing the blocks before storing is the right choice here: it will give you a better deduplication ratio on your StoreOnce.

Thanks
Mike