Disabling "Decompress backup data blocks before storing"

Disabling "Decompress backup data blocks before storing"

by bg.ranken » Wed Aug 17, 2016 4:43 pm

Hello,

We have a few backup copy jobs storing data in repositories that have the option "Decompress backup data blocks before storing" enabled. Due to space limitations we would like to disable this setting and let the data be stored compressed again.

If we disable this setting on a repository that already holds data from backup copy jobs, is there any way to have the job create a new full backup with the new compression settings, other than doing a reseed? If we adjust the schedule so that the next backup copy cycle creates a GFS restore point, will the new full backup file be compressed, or will it still be decompressed?

I'm just trying to avoid having to reseed these backup files if possible.
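
(For anyone weighing the same trade-off, here is a minimal Python sketch of one way to estimate the potential savings before flipping the setting: it samples random 1 MB blocks from an existing, decompressed backup file and zlib-compresses them. The file path is hypothetical, and zlib will not match Veeam's own compression exactly, so treat the result as a rough indication only.)

import random
import zlib
from pathlib import Path

BLOCK_SIZE = 1024 * 1024   # sample in 1 MB blocks
SAMPLES = 32               # how many random blocks to test

def estimate_compression_ratio(path: Path) -> float:
    """Compress a handful of random blocks from the file and
    return the average compressed/original size ratio."""
    size = path.stat().st_size
    if size <= BLOCK_SIZE:
        raise ValueError("file too small to sample")
    ratios = []
    with path.open("rb") as f:
        for _ in range(SAMPLES):
            f.seek(random.randrange(size - BLOCK_SIZE))
            block = f.read(BLOCK_SIZE)
            ratios.append(len(zlib.compress(block, level=6)) / len(block))
    return sum(ratios) / len(ratios)

if __name__ == "__main__":
    # Hypothetical path to one of the decompressed backup copy files.
    vbk = Path(r"E:\BackupCopies\Job1\Job1.vbk")
    ratio = estimate_compression_ratio(vbk)
    print(f"{vbk.name}: estimated compressed size ~{ratio:.0%} of current size")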

Thanks!
bg.ranken
Enthusiast
 
Posts: 56
Liked: 10 times
Joined: Wed Feb 18, 2015 8:13 pm
Full Name: Randall Kender

Re: Disabling "Decompress backup data blocks before storing"

by v.Eremin » Wed Aug 17, 2016 6:44 pm

You can execute an active full backup for a backup copy job by simply right-clicking the job and selecting the "Active full" option.

However, in your case an active full backup is not needed: as soon as you disable the corresponding option ("Decompress backup data blocks before storing"), new restore points will land on the repository in a compressed state.

Thanks.
v.Eremin
Veeam Software
 
Posts: 13701
Liked: 1020 times
Joined: Fri Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Disabling "Decompress backup data blocks before storing"

by bg.ranken » Wed Aug 17, 2016 9:22 pm

Yeah, I was trying to avoid an active full backup, since it would take days to copy the new data from our primary site and we have other backups that need to get offsite.

I'm glad to hear that the new restore points will be compressed. Does that also mean that the synthetic fulls created for GFS restore points count as new restore points and will be compressed as well?
bg.ranken
Enthusiast
 
Posts: 56
Liked: 10 times
Joined: Wed Feb 18, 2015 8:13 pm
Full Name: Randall Kender

Re: Disabling "Decompress backup data blocks before storing"

by Gostev » Thu Aug 18, 2016 12:05 am

Yes.
Gostev
Veeam Software
 
Posts: 21603
Liked: 2405 times
Joined: Sun Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Disabling "Decompress backup data blocks before storing"

by haslund » Thu Aug 18, 2016 8:18 am

Just keep in mind that if there was a reason you had this option enabled, disabling it could have certain impacts.
For example, if you are storing these backup copy jobs on a deduplication appliance and you enable compression, it could very negatively impact the deduplication ratio on that storage.
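
(To make this point concrete, here is a small, self-contained Python sketch, purely illustrative and not how Veeam or a dedup appliance works internally: a one-byte change in the source data leaves almost every raw chunk identical, but changes nearly the whole compressed stream, so a target that deduplicates fixed-size chunks finds far fewer duplicates once the blocks arrive pre-compressed.)

import hashlib
import random
import zlib

BLOCK = 1024 * 1024      # pretend this is one 1 MB job storage block
CHUNK = 4 * 1024         # pretend the target dedupes on 4 KB chunks

def chunk_hashes(data: bytes) -> set:
    """Split data into fixed-size chunks and hash each one."""
    return {hashlib.sha256(data[i:i + CHUNK]).digest()
            for i in range(0, len(data), CHUNK)}

random.seed(1)
# Compressible synthetic data (small alphabet, lots of repetition).
original = bytes(random.choices(b"ABCDEFGH", k=BLOCK))
# The "next day" version of the same block: a single byte changed.
changed = bytearray(original)
changed[1000] ^= 0xFF
changed = bytes(changed)

# Dedupe the raw blocks: almost every chunk is still shared.
raw_shared = len(chunk_hashes(original) & chunk_hashes(changed))

# Dedupe the per-block compressed versions: the one-byte change
# ripples through the compressed stream, so little or nothing matches.
comp_shared = len(chunk_hashes(zlib.compress(original)) &
                  chunk_hashes(zlib.compress(changed)))

print(f"shared chunks, uncompressed blocks: {raw_shared} of {BLOCK // CHUNK}")
print(f"shared chunks, compressed blocks:   {comp_shared}")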
Rasmus Haslund
Principal Technologist, Global Education Services @ Veeam Software
Veeam Certified Architect #1 | Veeam Certified Trainer #4 [v7,v8,v9] | Veeam Certified Trainer Mentor #1
Twitter: @haslund
Blog: www.perfectcloud.org
haslund
Veeam Software
 
Posts: 279
Liked: 52 times
Joined: Thu Feb 16, 2012 7:35 am
Location: Denmark
Full Name: Rasmus Haslund

Re: Disabling "Decompress backup data blocks before storing"

by bg.ranken » Fri Aug 26, 2016 4:51 pm

Thanks for the reminder, Rasmus. We are in fact backing up to a deduplicated repository, but it's Windows deduplication, so we need to keep the file size under 4TB (Windows Server 2016 deduplication won't dedupe a file larger than that). In a perfect world I'd be able to tell the job to split the backup files into multiple 1TB or 2TB files and then I wouldn't have this problem, but until then I can just leave compression on to keep the files under 4TB.

I know I've asked before for something like splitting backup files, but I don't think enough people are using Windows-based deduplicated repositories yet to warrant dedicating resources to it. I'm hoping that with the Windows Server 2016 release and all the deduplication improvements more people will start using it, and that splitting backup files can eventually be added to Veeam's roadmap. Even something like a per-VMDK option (similar to the per-VM option) would help in many of my cases.
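
(A quick way to keep an eye on that limit: a minimal Python sketch that walks a repository and lists backup files that have grown past the size the dedup engine will process. The repository path is hypothetical, and the 4 TB cutoff is simply the figure mentioned above; adjust both for your environment.)

from pathlib import Path

# Hypothetical repository path and size cutoff; adjust both for your setup.
REPO = Path(r"E:\BackupCopies")
LIMIT = 4 * 1024 ** 4   # 4 TB, the cutoff mentioned above

# Walk the repository and flag any Veeam backup files that exceed the
# limit and would therefore be skipped by the dedup engine.
oversized = [
    (p, p.stat().st_size)
    for p in REPO.rglob("*")
    if p.suffix.lower() in {".vbk", ".vib"} and p.stat().st_size > LIMIT
]

for path, size in sorted(oversized, key=lambda x: x[1], reverse=True):
    print(f"{size / 1024 ** 4:6.2f} TB  {path}")

print(f"{len(oversized)} file(s) over the {LIMIT / 1024 ** 4:.0f} TB limit")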
bg.ranken
Enthusiast
 
Posts: 56
Liked: 10 times
Joined: Wed Feb 18, 2015 8:13 pm
Full Name: Randall Kender

Re: Disabling "Decompress backup data blocks before storing"

by Andanet » Fri Oct 28, 2016 12:50 pm

Hi all,
After reading this thread, I just want to confirm the best configuration for my setup.
We use one HPE StoreOnce with 4 Catalyst stores and one gateway proxy per store: 3 stores for backup jobs and 1 store for the copy job.
Every backup job is configured with:
Inline deduplication: false
Compression level: Dedupe-friendly
Storage optimization: 4096 KB block size

Is it correct to remove the check from the setting "Decompress backup data blocks before storing"?
And should I do that only for the copy job, or for both kinds of jobs?
Thanks all.
Antonio
Andanet
Service Provider
 
Posts: 13
Liked: never
Joined: Wed Jul 08, 2015 8:26 pm
Full Name: Antonio

Re: Disabling "Decompress backup data blocks before storing"

by Mike Resseler » Mon Oct 31, 2016 6:59 am

Randall,

Are you using per-VM backup chains? (Explained very well by Luca here: http://www.virtualtothecore.com/en/veea ... up-chains/)

This is a very interesting option to use in combination with Windows dedupe.
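
(For Randall's case the appeal is easy to see with a back-of-the-envelope check: a single per-job chain puts the whole job into one file, while per-VM chains produce one file per VM, each of which is far more likely to stay under the dedup size cutoff. A tiny illustrative Python sketch; the VM sizes and the 4 TB figure are made-up examples.)

# Hypothetical full-backup sizes per VM, in TB, for one backup copy job.
vm_sizes_tb = {"sql01": 2.8, "file01": 1.9, "exch01": 1.2, "app01": 0.6}
LIMIT_TB = 4.0  # the per-file cutoff discussed above

per_job_file = sum(vm_sizes_tb.values())   # one big .vbk for the whole job
largest_per_vm = max(vm_sizes_tb.values()) # largest individual per-VM .vbk

print(f"single per-job file: {per_job_file:.1f} TB "
      f"({'skipped by dedup' if per_job_file > LIMIT_TB else 'ok'})")
print(f"largest per-VM file: {largest_per_vm:.1f} TB "
      f"({'skipped by dedup' if largest_per_vm > LIMIT_TB else 'ok'})")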

Antonio,

I would advise you to leave that checkbox enabled. Compression reduces the efficiency of deduplication, so it is a good idea to keep this option on: decompressing the data blocks before storing them will give you better deduplication on your StoreOnce.

Thanks
Mike
Mike Resseler
Veeam Software
 
Posts: 3381
Liked: 384 times
Joined: Fri Feb 08, 2013 3:08 pm
Location: Belgium, the land of the fries, the beer, the chocolate and the diamonds...
Full Name: Mike Resseler

