-
- Influencer
- Posts: 24
- Liked: never
- Joined: May 29, 2017 5:13 am
- Full Name: MS Sunil
- Contact:
How De-duplication and Compression works
Hi
We are new to Veeam; we just completed the implementation and have started taking Hyper-V VM backups (on-host backups are configured). We are looking for some clarity on how de-duplication and compression work for VM backups.
Consider one server with 40 GB of used disk space, with an Active Full backup scheduled every Saturday, a Synthetic Full every Wednesday, and incrementals on weekdays, keeping 60 restore points. As per the Veeam documentation, source-side de-duplication ensures only unique data blocks not already present in the previous restore point are transferred across the network, and target-side de-duplication checks the received blocks against other virtual machine (VM) blocks already stored in the backup file. However, when we check the backup file sizes, every weekly full is around 22 GB. We expected that since one full backup's data is already present on the SAN, subsequent full backups would not transfer and store the full data again, given that we keep multiple restore points. This is causing issues with SAN space utilization, and we changed many jobs to run as forever forward incremental jobs, but that is not accepted by our organization. We simply want to know whether VBR is working as expected, or whether we need to make changes to the schedule.
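Here is the rough arithmetic behind our concern (illustrative Python; the reduction ratio is just an assumed figure chosen to match the ~22 GB we observe):

used_gb = 40            # used space on the VM disk
reduction = 0.55        # assumed compression + in-file dedup ratio (illustrative)
full_gb = used_gb * reduction          # ~22 GB, matching what we see
fulls_per_week = 2                     # active full (Sat) + synthetic full (Wed)
weeks_retained = 60 / 7                # 60 restore points spans roughly 8-9 weeks

# If every full is self-contained, each one consumes full_gb on disk:
fulls_on_disk = round(fulls_per_week * weeks_retained)
print(f"~{fulls_on_disk} fulls x {full_gb:.0f} GB = ~{fulls_on_disk * full_gb:.0f} GB for fulls alone")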
One more point needs clarification. At present we have configured a scheduled job for each individual VM. If we instead create a single job with multiple VMs that share the same retention and schedule, will this improve the de-duplication ratio? (The documentation says target-side deduplication checks the received blocks against other VM blocks already stored in the backup file, thus providing global deduplication across all VMs included in the backup job.) We tried this for a few VMs, but found it still creates a separate file for each VM included in the job.
Looking for clarification and the best practice to follow.
Thanks in advance
-
- Product Manager
- Posts: 8191
- Liked: 1322 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: How De-duplication and Compression works
Hi,
First: Welcome to the forums.
I am not sure I understand everything you are asking, but I will give it a try. Feel free to tell me if I am wrong.
1. Our deduplication only works per job, which means one VM per job will not give you much benefit.
2. When you moved multiple VMs into one single job, were per-VM backup files enabled? Because if so, you can't take advantage of having multiple VMs in one single backup file (see here: https://helpcenter.veeam.com/docs/backu ... tml?ver=95)
Cheers
Mike
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: How De-duplication and Compression works
Moreover, Veeam B&R deduplication works within a backup file, so all full backups for the given job will have comparable size, since data is not deduplicated between them.
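A toy sketch of what "within a backup file" means (hypothetical Python, not Veeam's actual implementation): each backup file keeps its own dedup scope, so a second full of the same data is stored at full size again.

import hashlib

def dedup_within_file(blocks):
    """Store each unique block once inside this one file; return stored bytes."""
    seen, stored = set(), 0
    for block in blocks:
        digest = hashlib.sha256(block).digest()
        if digest not in seen:      # block not yet in THIS backup file
            seen.add(digest)
            stored += len(block)
    return stored

blocks = [b"A" * 1024, b"B" * 1024, b"A" * 1024]   # two unique blocks
full_1 = dedup_within_file(blocks)   # dedup table starts empty for file 1
full_2 = dedup_within_file(blocks)   # ...and again for file 2
print(full_1, full_2)                # 2048 2048 -- the second full doesn't shrink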
-
- Influencer
- Posts: 24
- Liked: never
- Joined: May 29, 2017 5:13 am
- Full Name: MS Sunil
- Contact:
Re: How De-duplication and Compression works
Thanks a lot to you all. I will explain the current configuration in detail and what I understood from your replies.
We are running a Hyper-V 2012 R2 environment only, with a single physical VBR 9.5 server, on-host backups, and in-line de-duplication.
We were confused by the Veeam statement "source side de-duplication ensures only unique data blocks not already present in the previous restore point are transferred across the network and target side de-duplication checks the received blocks against other virtual machine (VM) blocks already stored in the backup file". Mike also mentioned "our deduplication only works per job", and Foggy mentioned "deduplication works within a backup file". So what I now understand from the statement "source side de-duplication ensures only unique data blocks not already present in the previous restore point are transferred across the network" is: if data blocks are already present in previous restore points, they won't be transferred over the network. For example, a full job won't transfer the full data if the data blocks are present in the previous full backup, but the size of the full backup file will still be the same as (or larger than) the old full backup that is already there. Am I right?
(We thought that once a full backup existed, later full backups would be smaller than the initial full backup because of de-duplication; now I understand how de-duplication works.)
As per the best practice recommendation, the "per-VM backup files" option is enabled; I read the link you provided. So should we disable this option for better de-duplication and configure jobs with multiple VMs? If we configure backups with multiple VMs in a single file, how many VMs should be selected per job, and how can we calculate the number of VMs to be processed at a time, given that we are using on-host backup?
Thanks a lot
-
- Product Manager
- Posts: 8191
- Liked: 1322 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: How De-duplication and Compression works
It looks like you are correct. Source-side dedup lowers the traffic across the network, while target-side dedup is responsible for saving storage.
The deduplication is indeed per backup file. I apologize; it's still an old habit of mine to say it like that, but per-VM backup files indeed creates multiple backup files per job, and the deduplication will be per file.
Per-VM backup files has its advantages, one example being that it is great when you use Windows Server 2016 Deduplication on your repository. But if your storage is not a dedupe appliance and does not run software dedup, it might be more interesting to run a few jobs with multiple VMs in them.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: How De-duplication and Compression works
LMS wrote: So what I understood now on the statement "source side de-duplication ensures only unique data blocks not already present in the previous restore point are transferred across the network" is, if data blocks are there with previous restore points then they won't be transferred over the network ... but the size of the full backup file will be the same or more as the old full backup which is already there ... Am I right?

Full backup resets the backup chain and is a self-contained backup file, where data is not deduped against previous restore points (only within the processed VM disk).
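In sketch form (hypothetical Python, not actual Veeam code), the distinction looks like this:

import hashlib

def block_digests(disk_blocks):
    return {hashlib.sha256(b).digest() for b in disk_blocks}

def incremental(disk_blocks, previous_point_digests):
    # Transfer only blocks absent from the previous restore point.
    return [b for b in disk_blocks
            if hashlib.sha256(b).digest() not in previous_point_digests]

def active_full(disk_blocks):
    # A full resets the chain: everything is read and stored again.
    return list(disk_blocks)

disk = [b"os" * 512, b"db" * 512, b"new" * 512]
prev = block_digests(disk[:2])        # first two blocks already backed up
print(len(incremental(disk, prev)))   # 1 -- only the new block travels
print(len(active_full(disk)))         # 3 -- the full is self-contained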
-
- Influencer
- Posts: 24
- Liked: never
- Joined: May 29, 2017 5:13 am
- Full Name: MS Sunil
- Contact:
Re: How De-duplication and Compression works
Thank you all.
We tried both options on the repository (per-VM backup files enabled and disabled) against a set of VMs, but it didn't make any difference in size. So we will go with the per-VM backup files option.
-
- Enthusiast
- Posts: 61
- Liked: 8 times
- Joined: Mar 29, 2016 4:22 pm
- Full Name: sg_sc
- Contact:
Re: How De-duplication and Compression works
Full backup files (VBK) will always take up the full space (unless on ReFS), no matter whether previous full backup files are still present.
Veeam does the magic on the source side, using changed block tracking (or the Hyper-V equivalent) to transfer only changed blocks, and on the target side the in-backup-file deduplication saves storage space when you have multiple VMs with identical blocks of data. For instance, 10 Windows 2012 R2 VMs will definitely have a lot of blocks containing OS files in common, and those will be deduped in the backup file.
If you enable per-VM backup files you do not get that last benefit; likewise, if you create a job per VM you do not get that benefit.
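As a toy illustration of that difference (assumed block contents, not Veeam internals):

import hashlib

def stored_size(vm_block_lists, per_vm_files):
    # Unique bytes kept, depending on whether VMs share one backup file.
    if per_vm_files:
        # Each VM gets its own file, hence its own dedup scope.
        return sum(stored_size([blocks], False) for blocks in vm_block_lists)
    seen, total = set(), 0
    for blocks in vm_block_lists:     # all VMs land in one shared file
        for b in blocks:
            d = hashlib.sha256(b).digest()
            if d not in seen:
                seen.add(d)
                total += len(b)
    return total

os_block = b"win2012r2" * 100         # an OS block both VMs have in common
vms = [[os_block, b"app1" * 100], [os_block, b"app2" * 100]]
print(stored_size(vms, per_vm_files=False))   # 1700 -- shared block stored once
print(stored_size(vms, per_vm_files=True))    # 2600 -- stored once per VM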
If you want huge space savings without the need for special deduplication processes or appliances, you should look into ReFS 3.1 with 64K clusters and synthetic fulls.
As a test, I have 9 TB of backup copies (GFS: quarterly, monthly, and weekly synthetic full VBK files) on a 2 TB disk, thanks to ReFS and Veeam's fast block clone magic.
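The rough arithmetic behind that result looks like this (the sizes and change rate are assumed, illustrative figures):

logical_full_gb = 900      # size of one synthetic full (example value)
gfs_fulls = 10             # retained weekly/monthly/quarterly fulls
change_rate = 0.12         # assumed fraction of blocks that are new per full

logical = gfs_fulls * logical_full_gb                             # what the files report
physical = logical_full_gb * (1 + (gfs_fulls - 1) * change_rate)  # clones share blocks
print(f"logical ~{logical / 1000:.1f} TB, physical ~{physical / 1000:.1f} TB")
# logical ~9.0 TB, physical ~1.9 TB -- in line with the 9 TB-on-2 TB above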
The downside is that ReFS needs a beefy server (lots of RAM) if you intend to put a lot of TBs on it, and remember it must be ReFS 3.1 (Windows Server 2016) with a 64K block size, otherwise things will not go smoothly.
-
- Influencer
- Posts: 24
- Liked: never
- Joined: May 29, 2017 5:13 am
- Full Name: MS Sunil
- Contact:
Re: How De-duplication and Compression works
Thanks sg.
As I mentioned before, we created a job that includes 4 VMs, both with and without the per-VM backup files option, but it didn't save a single bit when comparing the two options. All the forums and Veeam documentation say to disable per-VM backup files for better de-duplication, so we will open a case to check this.
Regards
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: How De-duplication and Compression works
What kind of VMs are they? VMs created from a single template would be more likely to have blocks in common. Also, have you verified that the per-VM option took effect (i.e. there were separate backup chains for each VM in the repository)? It only takes effect after an active full backup if the setting is changed on an existing job.
-
- Veeam Software
- Posts: 243
- Liked: 64 times
- Joined: Aug 31, 2015 8:24 am
- Full Name: Bart Pellegrino
- Location: Netherlands
- Contact:
Re: How De-duplication and Compression works
Keep in mind that deduplication often works best on (active) full backups.
Incremental backups use CBT, and the changed blocks are, more often than not, unique blocks.
This changes when backing up a file server or mail server; there, only low dedupe values can be achieved.
Bart Pellegrino,
Adv. Technical Account Manager - EMEA &
FlexCredit Program Manager
-
- Influencer
- Posts: 24
- Liked: never
- Joined: May 29, 2017 5:13 am
- Full Name: MS Sunil
- Contact:
Re: How De-duplication and Compression works
Hi
The VMs are Windows 2012 R2 servers with SQL databases (the 4 servers we tested backups on use shared disks / are VMs in a cluster). We tried both options, and with the per-VM option separate files are created for each VM. All jobs were created fresh, meaning we tried only active full backups.