Comprehensive data protection for all workloads
Ikes73
Novice
Posts: 9
Liked: never
Joined: Jul 03, 2013 3:36 pm
Contact:

Optimizing backups for dedup

Post by Ikes73 »

Hi,

In our first setup we split our backups into multiple jobs. One job backs up the OS disks of all our VMs --> this gives a very high dedup rate.
Another job contains all our file server and archive data disks.
This setup, however, causes a problem when using SureBackup. The automated process doesn't detect that a VM's VMDKs are spread across different jobs, so a 'restored' VM cannot boot (which is an issue when, for example, using SureBackup on an SQL server).

So it seems that the only way to configure Veeam jobs is to put each VM completely in one job.
I don't think a 1 TB file share disk delivers much dedup alongside the OS disks, so you would want all your OS disks together in one job, meaning: put ALL your VMs in one job.
With approximately 80 VMs ... this gets really big (30 TB), which means creating very large LUNs, having very large VBKs, etc. I'm quite sure this could cause other performance issues.

Are there any guidelines/best practices for optimizing, or should we just do this without thinking too much: create jobs and fill them with VMs without worrying about maximum dedup, splitting into multiple jobs only to limit VBK and LUN sizes?

Tx.
Vitaliy S.
VP, Product Management
Posts: 27055
Liked: 2710 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Optimizing backups for dedup

Post by Vitaliy S. »

Hello,

Yes, you're right - the SureBackup job will not detect the other disks required to boot these VMs; the same applies to the Instant VM Recovery feature. Putting all VMs into a single job will create a single VBK file that might be hard to manage because of its size. So I would suggest grouping VMs by OS version - for example, place all Windows 2008 R2 VMs in one job, and repeat this step for every other OS. By the way, how many OS versions do you have in your environment? Also, can you please tell me what your repository is? Using a dedupe appliance or Windows Server 2012 with deduplication enabled might be a good option for you if you want to get maximum deduplication out of your backup files.

Thank you!
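The group-by-OS-version approach described above can be sketched in a few lines of Python. This is only an illustration of the grouping idea; the VM names and OS labels are hypothetical, not pulled from any real inventory:

```python
from collections import defaultdict

def group_vms_by_os(vms):
    """Group VMs into backup jobs keyed by guest OS version,
    so that similar OS disks land in the same job and dedup well."""
    jobs = defaultdict(list)
    for name, os_version in vms:
        jobs[os_version].append(name)
    return dict(jobs)

# Hypothetical inventory: (VM name, guest OS version)
inventory = [
    ("dc01", "Windows 2008 R2"),
    ("sql01", "Windows 2008 R2"),
    ("file01", "Windows 2012"),
    ("web01", "Windows 2008 R2"),
]

jobs = group_vms_by_os(inventory)
# One job per OS version, e.g. all 2008 R2 VMs end up together
```

Each resulting group would map to one backup job, keeping similar OS disks in the same VBK without ever splitting a single VM's disks across jobs.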
Ikes73
Novice
Posts: 9
Liked: never
Joined: Jul 03, 2013 3:36 pm
Contact:

Re: Optimizing backups for dedup

Post by Ikes73 »

Hi,
thanks for the quick and clear response.
Almost all of our VMs are W2k8R2, with just a few 2003, 2012 and Unix distros.
For now our repository is on a SAN, but we are expecting our new secondary storage soon. A dedup appliance was in the running for this secondary storage, but it didn't make the cut. I am considering W2012 dedup (I don't trust it for the long term, but it should be workable since we are going to export to tape after some time :-) ).
Are there best practice limits for VBK and LUN sizing?
We noticed that changing our setup after a first try had some 'unwanted' side effects, so starting from a good base is most advisable (of course).

Kind regards!
yizhar
Service Provider
Posts: 181
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Optimizing backups for dedup

Post by yizhar » 1 person likes this post

Hi.

Your problem is "priority".
You are too focused on dedup rates, instead of more important backup and recovery goals.
Dedup rates should be your last concern, after management of backups and restores, performance, stability, portability, and much more.

I suggest that you create several jobs with 5-20 VMs in each, depending on what's inside the VM.
For large file/mail servers with more than 500 GB - I suggest a single VM per job.
For small VMs (DC, TS, etc.) - several VMs per job.
Do not split a VM's disks between jobs - this avoids problems not only with SureBackup but also from a restore and management perspective.
Do not combine too many VMs, or too many GB, in a single job.
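The rules above amount to a simple planning heuristic. Here is a minimal Python sketch of that heuristic; the 500 GB threshold and 20-VM cap come from the post, while the VM names and sizes are made-up examples:

```python
def plan_jobs(vms, big_vm_gb=500, max_vms_per_job=20):
    """Split a list of (name, size_gb) VMs into backup jobs:
    large file/mail servers get a dedicated job, small VMs are batched."""
    jobs = []
    batch = []
    # Largest first, so big servers are peeled off into their own jobs
    for name, size_gb in sorted(vms, key=lambda v: -v[1]):
        if size_gb > big_vm_gb:
            jobs.append([name])          # one large server per job
        else:
            batch.append(name)
            if len(batch) == max_vms_per_job:
                jobs.append(batch)       # close the batch at the cap
                batch = []
    if batch:
        jobs.append(batch)               # flush the remaining small VMs
    return jobs

# Hypothetical VMs: (name, total provisioned GB)
plan = plan_jobs([("mail01", 800), ("dc01", 60), ("ts01", 120), ("file01", 1200)])
# file01 and mail01 each get their own job; dc01 and ts01 share one
```

A real plan would also account for proxy concurrency and backup windows, but the key property is preserved here: a VM is always whole within exactly one job.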

Yes - in the end you will get lower dedup rates, but the difference won't be that big; I think less than a 10% difference.

Again - I suggest that at first you simply forget about dedup rates and plan the jobs according to the other factors.
Then - add dedup considerations but as last priority and not first.

Yizhar
Ikes73
Novice
Posts: 9
Liked: never
Joined: Jul 03, 2013 3:36 pm
Contact:

Re: Optimizing backups for dedup

Post by Ikes73 »

Hi Yizhar ,

that is also a very clear statement and I totally agree.
In the end, dedup should be treated as a feature, considered after everything else.

Thanks to you both for the responses! I've got something to work on now!
Vitaliy S.
VP, Product Management
Posts: 27055
Liked: 2710 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Optimizing backups for dedup

Post by Vitaliy S. »

Yizhar is spot on! Backup and recovery goals should be the driving force. I would say a 5 TB VBK file should be fine. How do you present your SAN to the repository server - via iSCSI? If so, an NTFS-formatted LUN will give you up to a 16 TB target (the NTFS limit).
