Repository 'tiering'


by Richardrichard » Fri Mar 17, 2017 1:43 pm

Hi All,

I've been evaluating repository options for a size up to around 300TB, and am fairly convinced by ReFS at this point. I'm looking at a big Supermicro box. I had considered Storage Spaces (not S2D), but expect I'll get better performance from a hardware RAID setup.

Rather than just sticking a load of 10TB drives in a RAID60, I was considering a tiered approach: say ~60TB of RAID10 for ingestion, then moving data from the ingestion area into a RAID60 for longer-term retention. I'm concerned about how this would play with ReFS block clone; I expect that if I used a scheduled task to move these files I wouldn't get any benefit from ReFS. I understand block clone wouldn't work between the two areas anyway, but I would like to use it as I copy additional jobs into the retention area. What's the best plan of attack - backup job to the RAID10, followed by a backup copy job (BCJ) to the RAID60?
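For rough sizing of the two tiers described above, a quick back-of-envelope sketch (the drive counts and the 10TB drive size are illustrative assumptions, not figures from this thread):

```python
# Rough usable-capacity sketch for a two-tier repository.
# Drive size and counts below are illustrative assumptions only.

DRIVE_TB = 10

def raid10_usable(n_drives):
    """RAID10 mirrors every drive: usable = half the raw capacity."""
    return n_drives * DRIVE_TB / 2

def raid60_usable(n_drives, span_size):
    """RAID60 stripes RAID6 spans; each span loses 2 drives to parity."""
    spans = n_drives // span_size
    return spans * (span_size - 2) * DRIVE_TB

# ~60 TB ingestion tier would need 12 drives in RAID10:
print(raid10_usable(12))      # 60.0 TB
# 36 drives as three 12-drive RAID6 spans for retention:
print(raid60_usable(36, 12))  # 300 TB
```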

Any ideas or experience welcomed.
Richardrichard
Novice
 
Posts: 5
Liked: never
Joined: Tue Mar 07, 2017 5:57 am
Full Name: Rich

Re: Repository 'tiering'

by gollem » Fri Mar 17, 2017 1:50 pm

I can recommend looking into ZFS (using either TrueNAS, which has enterprise support options, or FreeNAS if you can manage it on your own).

ZFS has the option of adding a SLOG device (which holds the ZIL) to accelerate synchronous writes, and an L2ARC device for SSD read caching. On top of that, the ARC caches your most frequently read data in the server's ECC memory for ultra-fast access.
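For reference, attaching an SSD log device and a read cache to an existing pool looks roughly like this (pool and device names here are placeholders, not from this thread):

```shell
# Illustrative only: pool and device names are placeholders.
# The SLOG (which holds the ZIL) accelerates synchronous writes; mirror it.
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
# L2ARC extends the in-RAM ARC read cache onto SSD.
zpool add tank cache /dev/nvme2n1
```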

Also have a look here: veeam-backup-replication-f2/refs-4k-horror-story-t40629.html for some stories about ReFS
gollem
Influencer
 
Posts: 16
Liked: never
Joined: Sat Jun 16, 2012 7:26 pm
Full Name: Erik E.

Re: Repository 'tiering'

by Richardrichard » Fri Mar 17, 2017 1:59 pm

Thanks gollem

I've followed that thread with interest, and it looks like they have resolved the issues now.

I don't believe I need an SSD cache; RAID10 for ingestion should be sufficient, as backups will be taken from Nimble/NetApp snapshots in the DR datacentre rather than impacting production. We have a limited budget for the proxy/repository and I'm not sure a sufficiently sized TrueNAS will fit in it. Equally, one of my key drivers is simplicity, so having yet another proprietary box to feed and water isn't brilliant. That said, I will take a look.
Richardrichard

Re: Repository 'tiering'

by WimVD » Fri Mar 17, 2017 2:04 pm

My approach was using two physical HPE DL360 Gen9 servers redundantly connected to a D6020 enclosure.
Each DL360 accesses one half (drawer) of the enclosure with 12Gb SAS connections.
Each drawer has 4 SSDs and up to 31 nearline disks.

To soak up the writes I use HPE SmartCache on the P441 controllers, configuring the 4 SSDs as a write-back cache in RAID10.
The nearline disks are configured in RAID6.

The repositories are configured with ReFS 64K.
Performance is great; each repository easily writes a sustained 400+ MB/sec.
And there's performance headroom left - I have not yet been able to saturate the storage.
The bottleneck is always the source or the network.
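At that sustained rate, a quick backup-window estimate is straightforward (the 10TB data size below is an illustrative assumption, not a figure from this thread):

```python
# Back-of-envelope: time to write a given amount at a sustained rate.
# Only the 400 MB/sec figure comes from the post; the rest is illustrative.

def hours_to_write(data_tb, rate_mb_s):
    mb = data_tb * 1_000_000  # TB -> MB (decimal units)
    return mb / rate_mb_s / 3600

# e.g. writing 10 TB at a sustained 400 MB/sec:
print(round(hours_to_write(10, 400), 1))  # ~6.9 hours
```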

I have not encountered the issues described in the ReFS thread, but I'm proactively implementing the workaround with the latest patch and registry keys.
WimVD
Service Provider
 
Posts: 48
Liked: 10 times
Joined: Tue Dec 23, 2014 4:04 pm

Re: Repository 'tiering'

by Richardrichard » Fri Mar 17, 2017 3:19 pm

Interesting. If you don't mind me asking, what sort of price point was that solution?
Richardrichard

