Posts: 2
Liked: 1 time
Joined: May 10, 2011 4:11 pm
Full Name: Chris

Long Time User - Best Practice Refresher

Post by pendragon8062 » Mar 15, 2017 3:37 am

Good evening,

We've been using Veeam since version 5 and have come through all the major versions more or less unscathed. In the beginning we primarily used Synology repositories (first as CIFS/SMB, then as iSCSI). Now we use Windows repos (Dell RX20/X30) and have never been happier. Granular restore has saved us many a time, and we've been thrilled with the improvements since 2009. Most of our hiccups have been VMware CBT related, not Veeam related. Most of our current environments are VMware backed by hybrid storage (2-6 hosts).

As our backup jobs have gotten bigger, storage has gotten cheaper, and Veeam has added a lot of features, I wanted a quick check-in from the community on some of the areas we are revisiting. I recognize that some of the questions depend on our RTO/RPO, so definite answers aren't possible.

1. For local Windows repositories on Windows Server 2016: NTFS with dedupe, or ReFS? If ReFS, do you let Storage Spaces do the work or use a hardware RAID card? We have a couple of ReFS repos on commodity hardware and have been very happy. Any preference for a Linux distro that offers similar features/performance? Per-VM backup chains?

2. For short-term backup jobs where you are fielding the typical "Can you get the file from 1 day to 1 month ago?", how are you structuring the job (forward/reverse incremental, synthetic fulls, health check, defragmentation, etc.)? We became very fond of reverse incrementals due to their space savings and the frequency of restores within two weeks, but that preference seems to be disappearing as backup sets get bigger. Before backup copy was introduced, we would keep a reverse incremental chain for a month and schedule separate fulls for longer-term archiving.
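For what it's worth, the space trade-off between the two modes can be roughly sketched like this (a back-of-the-envelope estimate assuming a fixed daily change rate; the function name and numbers are illustrative, not anything Veeam computes internally):

```python
def chain_size_gb(full_gb, daily_change, retention_days):
    """Rough steady-state on-disk size of a backup chain.

    Both forever forward incremental and reverse incremental keep
    one full plus (retention_days - 1) increments at steady state,
    so their footprints are similar; the real difference between
    them is the I/O pattern on the repository, not the space used.
    """
    increment_gb = full_gb * daily_change
    return full_gb + (retention_days - 1) * increment_gb

# Example: 2 TB full, 5% daily change rate, 30 restore points
print(f"{chain_size_gb(2000, 0.05, 30):.0f} GB")  # 4900 GB
```

This is why the mode choice today tends to hinge on repository I/O behavior rather than capacity.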

3. When you make the short-term/long-term cut-off, how do you structure the longer-term backup copy job in terms of daily and GFS restore points? Do you use multiple backup copy jobs for different intervals?

4. When going to Cloud Connect, how many "daily" restore points do you keep in addition to the specified weekly/monthly/quarterly/yearly points? Any specific optimizations? We're primarily going to iLAND, keeping one week of dailies; the rest are GFS.
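To put numbers on a scheme like ours, the total restore point count under daily-plus-GFS retention is just the sum of the tiers (a trivial sketch; the GFS counts below are illustrative, not our actual policy):

```python
def gfs_point_count(dailies, weeklies, monthlies, quarterlies, yearlies):
    """Total restore points kept under a daily + GFS retention scheme."""
    return dailies + weeklies + monthlies + quarterlies + yearlies

# One week of dailies plus a typical GFS set:
# 4 weeklies, 12 monthlies, 4 quarterlies, 1 yearly
print(gfs_point_count(7, 4, 12, 4, 1))  # 28 restore points
```

Multiplying that count by the expected size per point gives a first-pass estimate of the cloud repository quota you'd need.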

5. Although we don't usually have problems, we have some SQL-backed applications (Visual FoxPro front end, no laughing please) that have a conniption every time the VM gets stunned for a backup. Even on NVMe flash storage with a couple of users, it's still an issue. Are there specific SQL optimizations that can alleviate this? Are there instances where you just turn application-aware processing off for temperamental VMs?

6. How frequently do you back up critical VMs whose business owners want "real-time" backups, typically because they've heard a Zerto spiel? Is every 15 minutes really feasible (subject to source and destination hardware, of course)? My understanding is that Zerto is crash-consistent, so it doesn't really offer the same thing.
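The feasibility question above mostly comes down to whether one cycle's changed data can be moved, plus the fixed per-job overhead, inside the interval. A minimal sketch of that check, assuming illustrative throughput and overhead figures (nothing here reflects Veeam's actual scheduler):

```python
def min_interval_minutes(changed_gb, throughput_mb_s, overhead_min=2.0):
    """Smallest feasible backup interval: time to transfer the changed
    data plus a fixed per-job overhead (snapshot create/remove, job
    start-up). If this exceeds the desired interval, jobs will overlap.
    """
    transfer_min = (changed_gb * 1024) / throughput_mb_s / 60
    return transfer_min + overhead_min

# 5 GB of changes per cycle over a link sustaining 100 MB/s
print(f"{min_interval_minutes(5, 100):.1f} min")  # 2.9 min
```

By that estimate a 15-minute interval has plenty of headroom for a small change set, but the overhead term dominates quickly as you shorten the interval, which is why the job bottleneck stats matter.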

7. Do you generally source your replicas from backups in order to avoid hitting production storage too often?


Product Manager
Posts: 5264
Liked: 459 times
Joined: May 19, 2015 1:46 pm

Re: Long Time User - Best Practice Refresher

Post by P.Tide » Mar 15, 2017 10:56 am


My two cents:

1) ReFS should give you better performance on compact and merge operations, plus some space savings on synthetic fulls; however, keep in mind that it does not appear to be entirely stable under certain conditions (check this thread).

5) Please check this KB for optimization steps.


Veeam Software
Posts: 18278
Liked: 1564 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Long Time User - Best Practice Refresher

Post by foggy » Mar 15, 2017 12:04 pm

2. The current recommendation is to use forever forward incremental: it is about as space efficient as reverse incremental, but it is less I/O intensive and doesn't require keeping the VM snapshot open for the entire backup duration.
3. You can have GFS retention in a single backup copy job, unless I'm not getting your point here.
4. This kind of question is primarily answered by the RPO requirements adopted in your company.
6. Yes, this entirely depends on the infrastructure, so you can find the shortest feasible interval by testing with gradually increasing frequency. Check the job bottleneck stats for possible improvements.
7. Makes sense, especially in the case of remote replicas.

