Long story short:
Planned is an LHR-based SOFS for short-term backup retention (7-14 days), consisting of 2x Dell PowerEdge R760xd2, maxed out disk-wise. These servers can be equipped with up to 28x 3.5" HDDs (12x front, 12x flex tray, 4x rear). My plan was to configure the RAID along the lines of what was suggested/done a few years ago with Cisco UCS / HPE Apollo 4200 et al.: in my case 12 TB disks in RAID60 -> 2x 13-disk spans plus 2 global hot spares. That works out to roughly 240 TiB usable per server (480 TiB for the SOFS).
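For anyone who wants to sanity-check that number, a quick back-of-the-envelope sketch (assumptions: 2x 13-disk RAID6 spans, 2 disks of parity per span, 2 global hot spares outside the spans, no filesystem overhead):

```python
# Rough usable-capacity check for the planned RAID60 layout
# (assumption: 12 TB disks, 2x 13-disk RAID6 spans, 2 global hot spares per server).
TB = 1000**4           # decimal terabyte in bytes
TiB = 1024**4          # binary tebibyte in bytes

disk_tb = 12
disks_per_span = 13
spans = 2
data_disks = spans * (disks_per_span - 2)   # RAID6 loses 2 disks per span to parity

usable_bytes = data_disks * disk_tb * TB
per_server_tib = usable_bytes / TiB
print(f"usable per server: {per_server_tib:.0f} TiB")            # ~240 TiB
print(f"usable for the 2-node SOFS: {2 * per_server_tib:.0f} TiB")  # ~480 TiB
```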
Short story long:
I've been in a lengthy conversation with a Veeam-internal presales engineer regarding our future repository design. Usage: vSphere (Pure FlashArray) and NAS (NetApp C30) backup. For convenient day-to-day restore tasks we already use local snapshots on both source storage systems (Pure SafeMode snapshots on the VMware datastores, NetApp tamperproof snapshots on the NAS SVMs). The Veeam backup chain will be used in "not-so-convenient" situations, and of course to provide a media break and to copy the data away from the originating systems.
The consensus on the Veeam backup repository chain so far: live data -> backup job (BJ) to the on-prem LHR-based short-term SOFS (2x Dell R760xd2) -> immediate backup copy job (BCJ), plus a capacity move after 14 days to the mid/long-term Veeam Data Cloud Vault.
The presales engineer told me he follows a bigger picture/idea: when the LHR is used only as a short-term repo, it becomes "losable/trashable", because it merely acts as a kind of landing zone/cache for the transfers (immediate BCJ and later capacity move) to the mid/long-term repos - besides obviously being the source for short-term restores. It can always be rebuilt from the existing backup chains residing in the capacity repo, or by running a new full backup. I like this thinking, tbh.
But his stance is that RAID60 is not the right way to go in this scenario - he strongly suggests RAID5. Maaaaaaybe RAID6 (in his voicing there would be even more "a"s), as RAID6 and RAID60 come with heavy write amplification and would simply be overkill - the short-term SOFS servers are "losable/trashable" after all...
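For context, the classic small-write I/O penalty behind the "write amplification" argument looks roughly like this (textbook numbers for small random writes, not a statement about this specific controller or workload):

```python
# Classic per-write back-end I/O counts for small random writes
# (read old data + read old parity, then write new data + new parity).
write_penalty = {
    "RAID5":  4,   # 2 reads + 2 writes per logical write
    "RAID6":  6,   # 3 reads + 3 writes (two parity blocks to update)
    "RAID10": 2,   # mirror: 2 writes
}
for level, ios in write_penalty.items():
    print(f"{level}: {ios} back-end I/Os per small random write")
```

Worth keeping in mind that backup ingest is mostly large sequential writes, where full-stripe writes reduce this penalty considerably.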
This is a stark contradiction to the main proposition from a few years back when Cisco UCS / HPE Apollo 4200 et al. were the hype - although back then they were probably not used for short-term retention only, so reliability was a key factor. My inner voice tells me that RAID5 really is deprecated and RAID6 should be used instead. Bit rot and rebuild failures are a thing... and 24-28 disks are too many for a single disk group, so RAID60 is the way to go?!? Even if the use case says "losable/trashable"...
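On the "rebuild failures are a thing" point, a rough sketch of the chance of hitting at least one unrecoverable read error (URE) during a degraded RAID5 rebuild - the URE rate of 1 per 10^15 bits (a commonly quoted spec for enterprise NL-SAS drives) and the full-surface read are simplifying assumptions:

```python
import math

# Chance of at least one URE while rebuilding a degraded RAID5 span:
# every surviving disk must be read end to end to reconstruct the failed one.
ure_rate = 1e-15            # assumed spec: 1 unrecoverable read error per 10^15 bits
disk_bits = 12e12 * 8       # one 12 TB disk in bits
surviving_disks = 12        # e.g. a 13-disk RAID5 span after one disk failure

bits_read = surviving_disks * disk_bits
p_ure = -math.expm1(-ure_rate * bits_read)   # 1 - e^(-rate * bits), Poisson approximation
print(f"P(at least one URE during the rebuild) ~ {p_ure:.0%}")   # roughly two thirds
```

With RAID6/60, a single URE during a rebuild is still covered by the second parity, which is exactly why single-parity groups of this size make people nervous.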
What is your take? Shoot =)
Thanks a lot!
Rasmus Haslund (Veeam Software) replied:
Re: Yet another Hardened Repo HW-Sizing Question
Rebuilding a RAID-5 made of 12 TB disks will take so long that the rebuild stress is likely to take out one or more additional disks in the process. I would strongly recommend going with your own suggestion of RAID-60.
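A rough sense of scale, assuming sustained rebuild rates somewhere between 50 and 150 MB/s (these depend heavily on controller settings and concurrent backup I/O):

```python
# Best-case vs. throttled rebuild time for a single 12 TB disk.
disk_bytes = 12e12
for rate_mb_s in (150, 100, 50):          # assumed sustained rebuild rates
    hours = disk_bytes / (rate_mb_s * 1e6) / 3600
    print(f"{rate_mb_s} MB/s -> {hours:.0f} h ({hours/24:.1f} days)")
```

So even in the best case you are looking at roughly a day per disk, and several days once the array is busy with backup jobs.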
Rasmus Haslund | Twitter: @haslund | Blog: https://rasmushaslund.com