Comprehensive data protection for all workloads
christophe.niel.AID
Service Provider
Posts: 11
Liked: never
Joined: Jul 25, 2019 1:11 pm
Full Name: Christophe Niel
Contact:

Windows 2019 REFS and Backup copy jobs

Post by christophe.niel.AID »

Hi
We are planning to remove our old StoreOnce appliances from the infrastructure and replace them with some large dedicated servers using Windows volumes, ReFS, and SSD caching (costing about a quarter of the StoreOnce maintenance...).
Our POC is doing fine: the deduplication ratio is good, and restore performance is way better than StoreOnce. So far, so good.

We currently have two production sites, each backing up locally. Stored capacity is around 700 TB uncompressed/un-deduplicated, against around 150 TB of raw capacity at about half usage.
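As a back-of-the-envelope sanity check of those numbers (a rough sketch: the 700 TB, 150 TB, and "half usage" figures come from the post above; everything else is simple arithmetic):

```python
# Rough effective-reduction estimate from the figures in the post.
logical_tb = 700        # uncompressed / un-deduplicated source data
raw_capacity_tb = 150   # raw repository capacity
used_fraction = 0.5     # "about half usage"

stored_tb = raw_capacity_tb * used_fraction
reduction_ratio = logical_tb / stored_tb

print(f"~{stored_tb:.0f} TB stored for {logical_tb} TB logical "
      f"=> ~{reduction_ratio:.1f}:1 effective reduction")
```

That works out to roughly a 9:1 effective reduction, which is indeed in the range one would hope for from a dedup-friendly layout.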

We also need to add an offsite copy, and we planned to use backup copy jobs, but we are wondering what the "best practice" is for those copy jobs. We were planning to do a full repository copy using the "new" immediate copy mode, which is easier to manage as we are juggling 90+ backup jobs, but this is not set in stone and may not be a good idea.


When we followed the existing literature for backup on ReFS with deduplication, there are specific settings: no synthetic fulls, only weekly active fulls, no inline dedup, no compression, etc. That's OK and seems to work fine on the POC.

As we need to retain 30 days online on both sites, on the copy job this means no fulls, only incrementals, and a long 30-day chain, which is not OK (or at least I'm not sure it's OK: our Veeam trainer was insistent that you shouldn't have more than 14 incrementals in a chain, and I've often seen this limit mentioned on forums).

Is it OK to enable "Defragment and Compact full backup file" on the copy job? The article on ReFS backup says not to do it.

Or is there a better way to have the copy job exactly mirror the source repository? (Maybe synchronize the data with a Windows copy instead of a Veeam backup copy job?)

Ideally we would have used a "Y" backup writing to two repositories (as there is no issue with bandwidth), but this is not an option in Veeam B&R.

The best practice article: https://bp.veeam.com/vbr/VBP/3_Build_st ... block.html
The guideline for backup jobs with "non-integrated" dedup: https://bp.veeam.com/vbr/VBP/3_Build_st ... ation.html

Testing on our test platform is slow with daily backups; we need to let it run a few weeks to see the behaviour of the various options, and we need to be sure before setting things up for production.
I'm open to any input, thoughts, advice, or past experience on this subject. The best practice articles are not really explicit about backup copy jobs in these cases.

thanks in advance
Christophe
PetrM
Veeam Software
Posts: 3268
Liked: 528 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Windows 2019 REFS and Backup copy jobs

Post by PetrM »

Hi Christophe,

I don't think there is a better way to mirror a source repository than a backup copy job; its primary goal is to ensure 3-2-1 rule compliance. Also, it's not clear why 14 restore points would be better than 30; perhaps there was some specific reasoning related to your particular case? Basically, the only factor I'd keep in mind is the capacity of the target storage. I wouldn't expect any issues if you have enough free space to store 30 points.
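For the capacity question, a rough sizing sketch along the usual "one full plus retained increments" lines (illustrative only: the 75 TB full size echoes the stored capacity mentioned earlier in the thread, and the 10% daily change rate is purely an assumption to be replaced with measured values):

```python
# Rough target-capacity estimate for 30 restore points.
full_tb = 75.0          # assumed size of one full backup on the target
daily_change = 0.10     # assumed daily change rate (replace with measured value)
points = 30             # retention: 30 restore points

increment_tb = full_tb * daily_change
needed_tb = full_tb + (points - 1) * increment_tb

print(f"~{needed_tb:.1f} TB for 1 full + {points - 1} increments")
```

With these assumed inputs the estimate lands near 300 TB before any dedup or compression savings on the target, which is why the free-space check matters before committing to 30 points.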

I'm not sure the compact operation really makes sense if you enable ReFS deduplication: it takes a long time and has lower efficiency in comparison to non-dedupe targets. I'd follow our best practices guide and keep this option disabled.

Thanks!
christophe.niel.AID
Service Provider
Posts: 11
Liked: never
Joined: Jul 25, 2019 1:11 pm
Full Name: Christophe Niel
Contact:

Re: Windows 2019 REFS and Backup copy jobs

Post by christophe.niel.AID »

Thanks for your answer

We need 30 days of backups as per our contracts with clients. My worry was the length of the chain on the copy, which is 30 days long (versus 7 days on the main job, as it has weekly active fulls).
We also need to ensure the availability of these backups even in the case of a datacenter loss (and to be able to continue backups and restores during the disaster; it has never happened, but it's in the contract, and never say never).

Right now we are just testing whether ReFS works correctly and what kind of dedup ratio and performance we get on a 'small' selection of live data; the Veeam repository design is not yet defined.

As we look into the subject, maybe scale-out repositories are the solution.
This article https://helpcenter.veeam.com/docs/backu ... ml?ver=110 specifically describes the use case I need:
"You seek to store data on several sites to ensure its safety in case of a disaster."
which is exactly what we want these copies for.
PetrM
Veeam Software
Posts: 3268
Liked: 528 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Windows 2019 REFS and Backup copy jobs

Post by PetrM »

It's a good decision to test the scenario on a small amount of data before running it in production. The Capacity Tier is very useful in your case as well; just be aware of the backup chain validation logic if you decide to leverage the Move policy.

Thanks!
