We are new to Veeam; we have just completed the implementation and started taking Hyper-V VM backups (on-host backup is configured). We are looking for some clarity on how deduplication and compression work for VM backups.
Consider a server with 40 GB of used disk space, scheduled with an Active Full backup every Saturday, a Synthetic Full every Wednesday, and incrementals on the remaining weekdays, keeping 60 restore points. According to the Veeam documentation, source-side deduplication ensures that only unique data blocks not already present in the previous restore point are transferred across the network, and target-side deduplication checks the received blocks against other virtual machine (VM) blocks already stored in the backup file. When we check the backup file size, it is around 22 GB for every weekly full job. We expected that, since one full backup's data is already on the SAN, subsequent full jobs would not transfer and store the full data again, given that we keep multiple restore points. This is causing SAN space-utilization issues. We changed many jobs to Forever Forward Incremental, but that mode is not accepted by our organization. We just want to know whether VBR is working as expected here, or whether we need to change the schedule.
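As far as we understand the documentation, deduplication is scoped to a single backup file (and its chain), not global across separate full backup files, so every Active Full starts from an empty block index and stores the data again. A rough conceptual sketch in plain Python (this is a toy model with made-up data, not Veeam's actual on-disk format, which also applies compression and metadata):

```python
import hashlib

def dedupe_into(backup_file, blocks):
    """Write only blocks whose hash is not already in this backup file's index."""
    written = 0
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in backup_file:
            backup_file[digest] = len(block)  # stand-in for storing the block
            written += 1
    return written

# Hypothetical VM disk: 100 blocks, but only 7 distinct block contents.
vm_blocks = [bytes([i % 7]) * 1024 for i in range(100)]

# Each Active Full starts a NEW backup file with an empty index, so every
# unique block is written again even though last week's full already holds it.
full_week1 = {}
full_week2 = {}
print(dedupe_into(full_week1, vm_blocks))  # 7
print(dedupe_into(full_week2, vm_blocks))  # 7 again: no dedup across full files
```

If this model is right, the ~22 GB per weekly full is expected behavior for Active Fulls, and only the incrementals within a chain benefit from the previous restore point.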
One more point needs clarification. At present we have configured a scheduled job for each individual VM. If we instead create a single job containing multiple VMs with the same retention and schedule, will this improve the deduplication ratio? (The documentation states that target-side deduplication checks the received blocks against other VM blocks already stored in the backup file, thus providing global deduplication across all VMs included in the backup job.) We tried this for a few VMs, but we still found that the job creates a separate backup file for each VM included in it.
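If the job is producing one file per VM, it may be worth checking whether the repository has the per-machine (per-VM) backup files option enabled; with per-VM chains each machine gets its own file, so the cross-VM deduplication described in the documentation would not apply. A toy sketch of the difference, again with invented data rather than Veeam's real format:

```python
import hashlib

def store_blocks(index, blocks):
    """Write only blocks whose hash is not already in this backup file's index."""
    written = 0
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in index:
            index[digest] = len(block)  # stand-in for actually storing the block
            written += 1
    return written

# Two hypothetical VMs that share the same guest-OS blocks.
shared_os = [bytes([i]) * 1024 for i in range(20)]  # 20 blocks common to both
vm_a = shared_os + [b"A" * 1024] * 5                # plus 1 unique block content
vm_b = shared_os + [b"B" * 1024] * 5

# Single backup file for the whole job: VM B's shared blocks are skipped.
job_index = {}
a_written = store_blocks(job_index, vm_a)   # 21 unique blocks written
b_written = store_blocks(job_index, vm_b)   # only 1 new block

# Per-VM backup files: the shared OS blocks are stored once per machine.
file_a, file_b = {}, {}
a_alone = store_blocks(file_a, vm_a)        # 21
b_alone = store_blocks(file_b, vm_b)        # 21
print(a_written, b_written, a_alone, b_alone)
```

So grouping similar VMs into one job should only improve the dedup ratio if the VMs actually land in one shared backup file; per-VM chains trade that away for faster, more granular restores and transforms.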
Looking for clarification and the best practice to follow.
Thanks in advance