Comprehensive data protection for all workloads
Post Reply
Darkzadow
Influencer
Posts: 13
Liked: 2 times
Joined: Mar 06, 2023 3:53 pm
Full Name: Brandon Halloran
Contact:

Repository Design

Post by Darkzadow »

Hello,

I have a new storage server I can set up mostly from the ground up: an HPE StoreEasy 1660 with 28x 18TB drives, Windows Server 2019, and 128 GB RAM. This is meant to be the new primary backup repository. The secondary repository is an off-site deduplicating NTFS NAS, with a copy to Azure Blob Storage for immutability. Once this is set up I can repurpose my current server as a local offline repository.

My environment is 130 VMs totaling 15 TB. The capacity calculator with my backup settings gives me 103 TB with ReFS and 436 TB without ReFS. I have heard of fragmentation issues with ReFS.

I have little familiarity with storage settings, and the internet basically tells me to "pick what's best for your environment."

Since this is dedicated entirely to Veeam storage, I was thinking of a single storage pool / logical disk: RAID 60 across 26 of the drives, with the other 2 set as hot spares, giving me 360 TB usable storage. I would then split that into 6x 60 TB volumes, formatting 1-2 of them as ReFS for the under-30-day restore points to take advantage of fast clone, and the rest as NTFS with dedup for my longer-term restore points.
Backup settings are daily forward incremental with weekly synthetic fulls and monthly active fulls.
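For anyone checking the numbers: the 360 figure works out if the array is built as two 13-drive RAID 6 groups (a common RAID 60 layout, assumed here since the post doesn't specify) and the result is read in binary TiB, the way Windows reports disk sizes. A quick sketch:

```python
# Usable-capacity math for the proposed layout. Assumptions (not stated in
# the post): RAID 60 as two 13-drive RAID 6 groups, 2 hot spares, 18 TB drives.
DRIVE_TB = 18          # decimal terabytes per drive
GROUPS = 2             # RAID 6 groups striped together (RAID 60)
DRIVES_PER_GROUP = 13  # 26 drives in the array, 2 left as hot spares
PARITY_PER_GROUP = 2   # RAID 6 reserves two drives' worth of parity

data_drives = GROUPS * (DRIVES_PER_GROUP - PARITY_PER_GROUP)   # 22
usable_tb = data_drives * DRIVE_TB                             # decimal TB
usable_tib = data_drives * DRIVE_TB * 1e12 / 2**40             # binary TiB

print(f"{data_drives} data drives -> {usable_tb} TB ({usable_tib:.0f} TiB)")
# -> 22 data drives -> 396 TB (360 TiB)
```

So "360 TB usable" is really 360 TiB (396 decimal TB); worth keeping in mind when comparing against the calculator's output.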

Is this a decent setup? Is splitting the server into multiple smaller pools better? I have read that dedup on the primary repository is bad, which is why I was thinking of trying ReFS.
Thank you,
Darkzadow
tyler.jurgens
Veeam Legend
Posts: 290
Liked: 128 times
Joined: Apr 11, 2023 1:18 pm
Full Name: Tyler Jurgens
Contact:

Re: Repository Design

Post by tyler.jurgens »

I would not use NTFS for any of the repos.

What's your goal around splitting up into different volumes?

You can achieve your goals with one large repository using ReFS. That said, I'd use XFS instead if you are comfortable going the Linux route.

Point your jobs at that one large ReFS/XFS repository and set your retention as required. With the block cloning you get from ReFS or XFS, keeping weekly/monthly/yearly backups adds negligible space usage. Splitting your repos up will hinder that, and you'll likely get more savings from ReFS/XFS than you would from dedupe on NTFS. If you go the XFS route you can also make your backups immutable (which often helps with audits or insurance requirements).
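To make the block-cloning point concrete, here is a rough back-of-the-envelope model. All the numbers (15 TB full, 5% daily change, 4 weeks of retention) are illustrative assumptions, not Veeam calculator output:

```python
# Why synthetic fulls are cheap with block cloning: an unchanged block in a
# new synthetic full is a shared reference, not a second copy on disk.
full_tb = 15.0       # assumed size of one full backup, pre-reduction
change_rate = 0.05   # assumed 5% of blocks change per day
weeks = 4            # retention: 4 weekly synthetic fulls + daily increments

daily_inc_tb = full_tb * change_rate

# Without block cloning, every weekly synthetic full is written out in full:
plain_tb = weeks * full_tb + 7 * weeks * daily_inc_tb

# With ReFS/XFS fast clone, only one full's worth of unique blocks exists;
# each later synthetic full adds roughly just the changed blocks, which are
# already counted in the incrementals:
fastclone_tb = full_tb + 7 * weeks * daily_inc_tb

print(f"without fast clone: {plain_tb:.1f} TB, with fast clone: {fastclone_tb:.1f} TB")
# -> without fast clone: 81.0 TB, with fast clone: 36.0 TB
```

Even this crude model shows why the gap widens fast as you keep more weekly/monthly fulls: each extra full costs ~15 TB without cloning and almost nothing with it.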
Tyler Jurgens
Veeam Legend x2 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @tylerjurgens.bsky.social
Darkzadow
Influencer
Posts: 13
Liked: 2 times
Joined: Mar 06, 2023 3:53 pm
Full Name: Brandon Halloran
Contact:

Re: Repository Design

Post by Darkzadow »

The goal of splitting into different volumes was to take advantage of NTFS deduplication, which requires volumes under 64 TB, and to leave storage space for when I am asked to store something other than Veeam backups (even though I was told this is the Veeam-only box, I can foresee being asked to store other data until more storage is approved, and NTFS is the accepted format for the rest of the environment). ReFS fragmentation has also come up as a concern whenever we have looked at ReFS.

Linux is being considered, but it's not an option until security gives me the go-ahead.
MarkBoothmaa
Veeam Legend
Posts: 181
Liked: 49 times
Joined: Mar 22, 2017 11:10 am
Full Name: Mark Boothman
Location: Darlington, United Kingdom
Contact:

Re: Repository Design

Post by MarkBoothmaa »

I would strongly advise against enabling dedup on any backup repository volumes. When the disk optimization task runs, it can wreak havoc with file locking.
Darkzadow
Influencer
Posts: 13
Liked: 2 times
Joined: Mar 06, 2023 3:53 pm
Full Name: Brandon Halloran
Contact:

Re: Repository Design

Post by Darkzadow »

Thank you for the knowledge. I will allocate more space to ReFS. Any issues with fragmentation?
MarkBoothmaa
Veeam Legend
Posts: 181
Liked: 49 times
Joined: Mar 22, 2017 11:10 am
Full Name: Mark Boothman
Location: Darlington, United Kingdom
Contact:

Re: Repository Design

Post by MarkBoothmaa »

I've not experienced any fragmentation issues yet. We have also disabled the Optimize Drives scheduled task to prevent scheduled defrags running on our extents.
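For reference, the built-in defrag runs from the `\Microsoft\Windows\Defrag\ScheduledDefrag` scheduled task, which can be disabled with `schtasks /Change ... /Disable`. A small sketch that builds the command (dry-run by default, so nothing is changed unless you opt in):

```python
# Sketch: disable the default Windows scheduled-defrag task. Assumes the
# task lives at the default \Microsoft\Windows\Defrag\ScheduledDefrag path.
import subprocess

def disable_scheduled_defrag(dry_run=True):
    cmd = ["schtasks", "/Change",
           "/TN", r"\Microsoft\Windows\Defrag\ScheduledDefrag",
           "/Disable"]
    if dry_run:
        return " ".join(cmd)          # show what would run, change nothing
    subprocess.run(cmd, check=True)   # actually disable the task (Windows only)
    return "disabled"

print(disable_scheduled_defrag())
```

Run it with `dry_run=False` on the repository server itself; leave the default to just preview the command.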
tyler.jurgens
Veeam Legend
Posts: 290
Liked: 128 times
Joined: Apr 11, 2023 1:18 pm
Full Name: Tyler Jurgens
Contact:

Re: Repository Design

Post by tyler.jurgens »

With ReFS, fragmentation is expected as a result of block cloning, and defragmenting the disk will cause you more pain than it helps. It's not like NTFS or FAT, where a heavily fragmented file system is punishing.

ReFS or XFS gives you the same space savings as the best dedupe engines, without needing to dedupe anything. It's like dedupe without the compute cost of performing dedupe operations.
Tyler Jurgens
Veeam Legend x2 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @tylerjurgens.bsky.social