Richardrichard
Novice
Posts: 5
Liked: never
Joined: Mar 07, 2017 5:57 am
Full Name: Rich

Repository 'tiering'

Post by Richardrichard »

Hi All,

I've been evaluating repository options for up to around 300TB, and am fairly convinced by ReFS at this point. I am looking at a big Supermicro box; I had considered Storage Spaces (not S2D), but expect I will get better performance from a hardware RAID setup.

Rather than just sticking a load of 10TB drives in a RAID60, I was considering a tiered approach: say ~60TB of RAID10 for ingestion, then moving data from the ingestion area into the RAID60 for longer-term retention. I am concerned about how this would play with ReFS block clone; if I used a scheduled task to move the files, I expect I wouldn't get any benefit from ReFS. I understand block clone wouldn't work between the two areas anyway, but I would like to use it as I copy additional jobs into the RAID60 area. What's the best plan of attack - a backup job to RAID10 followed by a backup copy job (BCJ) to RAID60?
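For reference, my rough plan for checking whether block clone is actually paying off is to compare the logical size of the backup files against the volume's used space. A minimal Python sketch of the idea, assuming the repository has the volume to itself (both paths are placeholders):

```python
# Rough estimate of ReFS block-clone (fast clone) savings: if cloning is
# working, the logical size of the backup files should clearly exceed the
# space actually used on the volume. Assumes nothing else lives on the
# volume; both paths below are placeholders.
import os
import shutil

REPO = "E:\\Backups"   # hypothetical repository folder
VOLUME = "E:\\"        # root of the ReFS volume holding it

logical = 0
for dirpath, _dirnames, filenames in os.walk(REPO):
    for name in filenames:
        try:
            logical += os.path.getsize(os.path.join(dirpath, name))
        except OSError:
            pass  # file vanished mid-walk, e.g. a merge in progress

used = shutil.disk_usage(VOLUME).used
print(f"Logical size of backup files: {logical / 2**40:.2f} TiB")
print(f"Space used on volume:         {used / 2**40:.2f} TiB")
if logical > used:
    print(f"Apparent block-clone savings: {(logical - used) / 2**40:.2f} TiB")
```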

Any ideas or experience welcomed.
gollem
Enthusiast
Posts: 33
Liked: 7 times
Joined: Jun 16, 2012 7:26 pm
Full Name: Erik E.

Re: Repository 'tiering'

Post by gollem »

I can recommend looking into ZFS (using either TrueNAS, which has enterprise support options, or FreeNAS if you can manage it on your own).

ZFS has the option of dedicated SLOG and L2ARC devices, which basically give you SSD caching for both writes (the ZIL, the intent log for synchronous writes, can be placed on a fast SLOG device) and reads (L2ARC). On top of that, the ARC caches recently read data in the server's ECC memory for ultra-fast access.
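If you want to check whether the caching is actually earning its keep, the ARC and L2ARC hit rates are easy to pull on a Linux OpenZFS box. A minimal Python sketch reading the standard kstat counters (verify the arcstats path exists on your distro before relying on it):

```python
# Report ZFS ARC / L2ARC effectiveness from the kernel's kstat counters.
# /proc/spl/kstat/zfs/arcstats is the usual OpenZFS-on-Linux location;
# confirm it exists on your system.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

def read_arcstats(path=ARCSTATS):
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            # data lines look like: "hits  4  123456"; header lines are skipped
            if len(parts) == 3 and parts[2].isdigit():
                stats[parts[0]] = int(parts[2])
    return stats

def hit_ratio(hits, misses):
    total = hits + misses
    return hits / total if total else 0.0

s = read_arcstats()
print(f"ARC size: {s['size'] / 2**30:.1f} GiB (target max {s['c_max'] / 2**30:.1f} GiB)")
print(f"ARC hit ratio:   {hit_ratio(s['hits'], s['misses']):.1%}")
print(f"L2ARC hit ratio: {hit_ratio(s.get('l2_hits', 0), s.get('l2_misses', 0)):.1%}")
```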

Also have a look here for some real-world ReFS stories: veeam-backup-replication-f2/refs-4k-hor ... 40629.html
Richardrichard
Novice
Posts: 5
Liked: never
Joined: Mar 07, 2017 5:57 am
Full Name: Rich

Re: Repository 'tiering'

Post by Richardrichard »

Thanks gollem

I've followed that thread with interest, and it looks like the issues have been resolved now.

I don't believe I need an SSD cache; RAID10 for ingestion should be sufficient, as backups will be taken from Nimble/NetApp snapshots in the DR datacentre rather than impacting production. We have a limited budget for the proxy/repository, and I'm not sure a sufficiently sized TrueNAS will fit in it. Equally, one of my key drivers is simplicity, so having yet another proprietary box to feed and water isn't brilliant. That said, I will take a look.
WimVD
Service Provider
Posts: 60
Liked: 19 times
Joined: Dec 23, 2014 4:04 pm

Re: Repository 'tiering'

Post by WimVD »

My approach was to use two physical HPE DL360 Gen9 servers redundantly connected to a D6020 enclosure.
Each DL360 accesses one half (drawer) of the enclosure over 12Gb SAS connections.
Each drawer has 4 SSDs and up to 31 nearline disks.

To soak up the writes I use HPE SmartCache on the P441 controllers and configure the 4 SSDs as a write-back cache in RAID10.
The nearline disks are configured in RAID6.

The repositories are configured with ReFS 64K.
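(If anyone wants to double-check the allocation unit size on their own repository volume, it can be read straight from the Win32 API. A quick Python sketch; the E: drive letter is just a placeholder:)

```python
# Windows-only: report a volume's cluster (allocation unit) size, e.g. to
# confirm an ReFS repository volume was formatted with 64K clusters.
import ctypes
from ctypes import wintypes

def cluster_size(root="E:\\"):  # placeholder drive letter
    sectors_per_cluster = wintypes.DWORD()
    bytes_per_sector = wintypes.DWORD()
    free_clusters = wintypes.DWORD()
    total_clusters = wintypes.DWORD()
    if not ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    ):
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

print(f"Cluster size: {cluster_size() // 1024}K")  # expect 64K here
```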
Performance is great; each repository easily sustains writes of 400+ MB/s.
And there's performance to spare - I have not yet been able to saturate the storage.
The bottleneck is always the source or the network.

I have not encountered the issues described in the ReFS thread, but I'm proactively implementing the workaround with the latest patch and registry keys.
Richardrichard
Novice
Posts: 5
Liked: never
Joined: Mar 07, 2017 5:57 am
Full Name: Rich

Re: Repository 'tiering'

Post by Richardrichard »

Interesting. If you don't mind me asking, what sort of price point did that solution come in at?