Deployment questions

Discussions specific to Microsoft Hyper-V hypervisor

by ChrisJ83Knights » Tue Jul 11, 2017 7:12 pm

Hi all

I am just in the process of configuring our Veeam installation and was wondering whether people predominantly use SureBackup jobs, or whether they only use SureBackup for critical VMs, since they take longer than normal backup jobs?

Or is the time difference not actually that large, and is it recommended to use SureBackup jobs predominantly?

Should I also always enable the application-aware setting? Or should I run vssadmin list writers first to see if anything is VSS-aware?

From what I have read about deduplication, you get better results by grouping similar VMs; in our case that would apply to our RDSH VMs. Our initial storage is fast-access and not deduplicated, so should I just back up each RDSH VM on its own and then create a backup copy job that groups the multiple RDSH VMs onto deduplicating storage? Or should the VMs be grouped in the SureBackup/backup job even though they exist on separate hosts?

Is it also recommended to set the jobs to Optimal for compression and deduplication?

If backing up over iSCSI to a Synology RackStation that has Server 2012 R2 deduplication enabled, should I enable the "Decompress before storing" setting to improve deduplication rates? Or is this trial and error? This will be for archive storage.

Any feedback appreciated.

Many thanks
Full Name: Chris Johnson

Re: Deployment questions

by Mike Resseler » Wed Jul 12, 2017 5:31 am


Not sure if I understand it completely, but I will give it a try.

SureBackup is used to verify your backups. I would advise you to create some of these jobs for your critical VMs (to test whether they can be booted, and with additional scripts to test the workload inside as well). I would also advise some sort of rotation for the non-critical VMs, so that you can test those at a regular interval too. So SureBackup is NOT a backup job; it is a system to verify your VMs. (It can also be used as a system to test patching and updates, or to train new administrators.)

Regarding our application-aware setting: it is not only VSS. Our application-aware processing (AAP) is more advanced. So if it is your own environment, I would certainly enable it.
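For what it's worth, a quick way to see which VSS writers are registered inside a guest is the command the original question mentions. Run it from an elevated prompt inside the VM; the sample output below is only indicative, since the writers present depend on the installed roles:

```shell
# List the VSS writers registered inside the guest OS (Windows, elevated prompt).
vssadmin list writers

# Typical entries look like this (writers and states vary by system):
#   Writer name: 'SqlServerWriter'
#   ...
#   State: [1] Stable
#   Last error: No error
```

A writer stuck in a failed state is a common reason application-aware processing has trouble, so this is worth checking before the first job run.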

For deduplication: I would actually put them together in the backup job, so you get deduplication per job on your fast storage and save space there. If I understood correctly, your backup copy job will land the data on deduplicating storage, so it is there that you could benefit from per-VM chains, depending on what you are trying to achieve.
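To build some intuition for why grouping similar VMs (like the RDSH servers) in one job helps, here is a toy block-level dedup sketch in plain Python with made-up data. This is not Veeam's actual dedup engine; it just shows the principle that blocks shared across similar images are stored only once:

```python
import hashlib

def unique_blocks(images, block_size=4096):
    """Count unique fixed-size blocks across a set of disk images
    (a toy stand-in for what a deduplicating store does)."""
    seen = set()
    total = 0
    for data in images:
        for i in range(0, len(data), block_size):
            seen.add(hashlib.sha256(data[i:i + block_size]).digest())
            total += 1
    return len(seen), total

# Two "RDSH-like" images: an identical 16 KiB "OS" region plus one
# unique 4 KiB block each (deterministic fake data for the demo).
base = b"".join(hashlib.sha256(i.to_bytes(2, "big")).digest() for i in range(512))
vm1 = base + b"A" * 4096
vm2 = base + b"B" * 4096

uniq, total = unique_blocks([vm1, vm2])
print(uniq, total)  # → 6 10: only 6 unique blocks need storing out of 10
```

Put two dissimilar workloads in the job instead and almost no blocks are shared, which is why grouping similar VMs gives the better ratio.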

Optimal for compression and dedupe: it depends on where you are storing the data and what you want to achieve. If it is local disk, Optimal gives you the best ratio between backup time and disk savings. But if you prefer storage savings over speed, you can choose High or Extreme.

No need to decompress in that last scenario, but be aware that Windows dedupe on 2012 R2 can cause issues with larger files, so in that case make sure you use per-VM backup files to keep the VBK files smaller (2016 dedupe is better, but can still have that large-file issue, so the advice remains).

Mike Resseler
Veeam Software
Location: Belgium, the land of the fries, the beer, the chocolate and the diamonds...
