Comprehensive data protection for all workloads
taylorbox
Influencer
Posts: 19
Liked: 2 times
Joined: Nov 30, 2011 8:06 pm
Contact:

Backup Job Efficiency Versus Flexibility

Post by taylorbox »

We're using Veeam Backup and Replication 6.

I understand that grouping similar VMs into fewer jobs provides greater deduplication efficiency than separating those VMs into individual jobs.

However, separating VMs into individual jobs provides greater flexibility: each VM's job can be configured with its own restore points, backup modes, and backup schedules.

Also, if I want to run a backup for just one VM, I can do this more easily when each VM has its own job. If many VMs are grouped into just one job and I need to run a backup for a specific VM in that job during the day (such as in an emergency), I would have to run the entire job, backing up all of the job's VMs, just to get the one backup I need.

If I understand the Veeam B&R design correctly, then clearly Veeam B&R creates a tug-of-war between grouping VMs into one or a few backup jobs for greater deduplication efficiency and easier job management, versus separating VMs into numerous jobs for greater flexibility of job configuration.

This seems to be a design issue... or a product limitation. The key problem is the lack of a "global," folder-based or volume-based deduplication engine in Veeam B&R. If we could simply set Veeam B&R to deduplicate all jobs stored in XYZ folder on the server... problem solved.
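To make the tradeoff concrete, here is a minimal toy sketch (my own illustration, not how Veeam's dedup engine actually works): blocks are modeled as simple strings, two jobs contain VMs deployed from the same template, and we count how many unique blocks must be stored under per-job dedup versus a single global pool.

```python
# Toy model of per-job vs. global deduplication.
# Block names ("os", "app1", ...) are hypothetical placeholders.
job_a = ["os", "os", "app1", "data_a"]  # blocks from VMs in job A
job_b = ["os", "os", "app1", "data_b"]  # blocks from VMs in job B


def stored_blocks_per_job(jobs):
    # Each job deduplicates only within its own backup file set.
    return sum(len(set(job)) for job in jobs)


def stored_blocks_global(jobs):
    # A single global pool deduplicates across every job.
    return len({block for job in jobs for block in job})


print(stored_blocks_per_job([job_a, job_b]))  # 6: {os, app1, data_a} + {os, app1, data_b}
print(stored_blocks_global([job_a, job_b]))   # 4: {os, app1, data_a, data_b}
```

The global pool stores fewer blocks because the template-derived blocks shared between jobs are kept only once, which is exactly the efficiency you lose by splitting VMs into separate jobs.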

I would appreciate any comments from Veeam on this. Maybe I'm missing something important here?

(I have already read this forum's FAQs and other posts on this subject).

Thanks,
-Taylorbox
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Backup Job Efficiency Versus Flexibility

Post by tsightler »

Certainly your statement is correct, but like almost everything in life it's a give-and-take; there's no perfect solution for everything. The concept behind Veeam as it is currently designed is that a single job should be represented on disk as a single set of files. You can take the files, copy them somewhere else, archive them to tape, etc., and they will always contain the VMs that were in that job. Veeam's deduplication was designed primarily to save the space of redundant data within a job; you're unlikely to find a whole lot of duplicated data between an Exchange server and a SQL server anyway. On the other hand, a group of application servers running the same application, deployed from the same template, may deduplicate to great effect.

If we simply created a "folder" where everything was deduplicated, then the backup files would no longer be self-contained entities. It's also much more difficult to scale such a pool from a performance perspective: you'd likely end up with multiple dedupe pools across multiple disks, and obviously that would still lead to less-than-optimal deduplication.

Note that I'm not saying that having a pool and deduplicating across jobs is not a good idea, simply that it is, as of today, not part of the product's design. You can do this today with a software deduplication option or a hardware dedupe appliance.
dellock6
VeeaMVP
Posts: 6165
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Backup Job Efficiency Versus Flexibility

Post by dellock6 »

Besides what Tom has said, if you are concerned about deduplication ratio more than other aspects of backup operations, I would suggest having a look at deduplication appliances; there are some that can take Veeam backups and deduplicate across different backup sets.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1