jandrewartha
Enthusiast
Posts: 34
Liked: 6 times
Joined: Feb 13, 2017 1:49 am
Contact:

Storage spaces or S2D on 2016 for archive backup repository

Post by jandrewartha »

So I've built a backup server to be a copy target and store monthly archives for 12+ months. I currently have 12x 8TB WD Red Pros and 2x 400GB Intel 750 NVMe SSDs, and the case has room for 36 3.5" HDDs in total. So now I come to the question of how to provision the storage. ReFS of course, but then do I use Storage Spaces or Storage Spaces Direct (S2D)? SS is simple, but if I do tiering it doesn't support parity, only mirroring, and it retiers once a day via a scheduled task. S2D is more complex and technically needs multiple nodes, but I found a blog post that suggests you can set it up with one node by setting

Code: Select all

-FaultDomainAwarenessDefault PhysicalDisk
when creating a pool. Licensing isn't an issue; we get Datacenter edition included as part of our education site license. Thoughts?
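For context, the trick looks roughly like this (a sketch based on that blog post, with hypothetical pool/volume names and sizes; as discussed below, this is not a Microsoft-supported configuration):

Code: Select all

# Pool all eligible disks, with fault domains at the physical-disk level
# instead of the per-node default used by multi-node S2D
New-StoragePool -FriendlyName "ArchivePool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true) `
    -FaultDomainAwarenessDefault PhysicalDisk

# Volumes then place their copies across disks rather than across nodes
New-Volume -StoragePoolFriendlyName "ArchivePool" -FriendlyName "Archive" `
    -FileSystem ReFS -ResiliencySettingName Parity -Size 40TB   # hypothetical size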
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by Gostev »

S2D, no doubt - it's already much more feature-rich than SS, and Microsoft keeps adding more cool features there! SS, meanwhile, is considered legacy tech.
jandrewartha
Enthusiast
Posts: 34
Liked: 6 times
Joined: Feb 13, 2017 1:49 am
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by jandrewartha »

Hi Gostev, that doesn't surprise me, but am I going to shoot myself in the foot trying to do S2D on a single node?
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by tsightler »

I personally would not build and store my backups on an unsupported configuration if I really cared about them. While there are hackish ways to get S2D running on a single node, they are not supported. I've seen tons of problems even with 2-node S2D and wouldn't really recommend fewer than 3 nodes. Also, S2D requires 2016 Datacenter, while regular Storage Spaces only requires Standard edition, so if you're looking at a single node I believe your only option is really legacy Storage Spaces.

The biggest negative of simple Storage Spaces is write performance, but you can compensate for this somewhat by allocating a large write cache when you create the virtual disk. I believe the maximum size is 100GB (the default is something like 1GB for mirrored and 4GB for parity), and I think this is the best use case for the NVMe drives rather than tiering. However, I'm not sure that will be workable with the Intel 750s, as those are not high-endurance SSDs and the write cache will likely burn them out quickly.
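For illustration, allocating a bigger cache at creation time would look something like this (just a sketch with hypothetical pool/disk names, and it assumes the SSDs are in the pool to back the write cache):

Code: Select all

# Parity virtual disk with an explicitly large write-back cache
# (instead of the small default picked by the GUI)
New-VirtualDisk -StoragePoolFriendlyName "BackupPool" -FriendlyName "ArchiveDisk" `
    -ResiliencySettingName Parity -UseMaximumSize `
    -WriteCacheSize 100GB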
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by dellock6 »

I wonder why you want to use Storage Spaces rather than relying on the server's hardware RAID controller? Yes, you lose self-healing of corrupted blocks, but the performance penalty of SS has been significant in every environment I've seen. And even on a simple volume, you still get all the ReFS benefits like block clone and integrity streams.
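As a rough example, a plain ReFS volume on a hardware RAID LUN is just this (a sketch with a hypothetical drive letter and path):

Code: Select all

# Format the RAID LUN with ReFS and 64KB clusters
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 65536

# Integrity streams can then be enabled per file or folder where wanted
Set-FileIntegrity -FileName "E:\Backups" -Enable $true
Get-FileIntegrity -FileName "E:\Backups"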
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
jandrewartha
Enthusiast
Posts: 34
Liked: 6 times
Joined: Feb 13, 2017 1:49 am
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by jandrewartha »

I was originally speccing the hardware for ZFS, including an IT-mode SAS controller, and saw S2D as the Windows equivalent, with the block-clone benefits of ReFS being better than ZFS dedupe. I didn't realise the limitations until now that I've installed it. For now I've set up a mirrored, tiered configuration.

Is there a quick reference for the best practices for repository and job settings on ReFS? I've found plenty of blog posts but it'd be nice if there was a central wiki or something.
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by dellock6 » 1 person likes this post

Hi,
we are trying to consolidate all the information from the threads we have here into a single document; I agree that the 37 pages of the "ReFS 4k horror stories" thread in particular are hard to read from start to end. It may take some time, though.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
jandrewartha
Enthusiast
Posts: 34
Liked: 6 times
Joined: Feb 13, 2017 1:49 am
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by jandrewartha »

Hi Luca,

That's good to hear. I think your blog post https://www.virtualtothecore.com/en/an- ... dows-2016/ was one of the ones I read when speccing this server months ago; it must have been before you added the caveat about tiering requiring S2D.
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by dellock6 »

Yeah, it's a subtle difference: it works even on single servers, but it's not supported by Microsoft.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
nismoau
Novice
Posts: 8
Liked: never
Joined: Jul 06, 2016 1:29 am
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by nismoau »

Our experience trialing Storage Spaces for a backup repository running WS 2012 R2 with an old FC SAN attached has been brilliant. The SAN has a mishmash of disk types and speeds, and we've grouped them on the SAN side into RAID groups by common disk type, size and speed. Then, on the Windows side, we presented the separate LUNs as one single volume using Storage Spaces, and set up the faster RAID groups built on the 15k SAS disks as the 'SSD tier'. We get great performance out of the unit for both backups and restores, plus the obvious bonus of a larger aggregated single volume to house backups.
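(For anyone recreating this: SAN LUNs usually report MediaType 'UnSpecified', so you typically have to tag the fast LUNs by hand once they're in the pool before building the tiers. Hypothetical LUN names below.)

Code: Select all

# Mark the 15k-SAS-backed LUNs as the "SSD" tier and the rest as HDD
Get-PhysicalDisk | Where-Object FriendlyName -like "FastLUN*" | Set-PhysicalDisk -MediaType SSD
Get-PhysicalDisk | Where-Object FriendlyName -like "SlowLUN*" | Set-PhysicalDisk -MediaType HDD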

In my opinion, this option is also far better than Veeam's built-in scale-out repository feature.
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by dellock6 » 1 person likes this post

Just as a note, the Veeam scale-out backup repository has a different goal than SS, S2D or any other scale-out storage system; think of it more as a logical aggregation/federation of multiple repositories. There are advantages like a single pointer for backup targets (even when you change extents) and evacuation options (useful for migrating backups). You could even have multiple SS volumes be part of the same SOBR group.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
eschek
Novice
Posts: 4
Liked: never
Joined: Dec 23, 2014 11:32 am
Full Name: Stephan Liebner
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by eschek »

jandrewartha wrote:So I've built a backup server to be a copy target and store monthly archives for 12+ months. I currently have 12x 8TB WD Red Pros and 2x 400GB Intel 750 NVMe SSDs, and the case has room for 36 3.5" HDDs in total. So now I come to the question of how to provision the storage. ReFS of course, but then do I use Storage Spaces or Storage Spaces Direct (S2D)? SS is simple, but if I do tiering it doesn't support parity, only mirroring, and it retiers once a day via a scheduled task. S2D is more complex and technically needs multiple nodes, but I found a blog post that suggests you can set it up with one node by setting

Code: Select all

-FaultDomainAwarenessDefault PhysicalDisk
when creating a pool. Licensing isn't an issue; we get Datacenter edition included as part of our education site license. Thoughts?
Hi,
you can configure tiering with parity and it is supported (I opened a Microsoft ticket in the past to get confirmation)! I have done this with Windows Server 2016 (no more scheduled tasks for tiering). You need to do it with PowerShell rather than the GUI, like this:

create a pool with all available disks:
New-StoragePool -FriendlyName "<poolname>" -PhysicalDisks (Get-PhysicalDisk -CanPool $true) -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName

create two tiers with an interleave size of 512KB:
New-StorageTier -StoragePoolFriendlyName <poolname> -FriendlyName HDD_Tier -MediaType HDD -ResiliencySettingName Parity -Interleave 524288
New-StorageTier -StoragePoolFriendlyName <poolname> -FriendlyName SSD_Tier -MediaType SSD -ResiliencySettingName Mirror -Interleave 524288

get the max supported size of both tiers (in bytes; divide by 1TB if you want to see terabytes):
$hdd_maxsize = (Get-StorageTierSupportedSize -FriendlyName HDD_Tier -ResiliencySettingName Parity).TierSizeMax
$ssd_maxsize = (Get-StorageTierSupportedSize -FriendlyName SSD_Tier -ResiliencySettingName Mirror).TierSizeMax

create the virtual disk / volume with the max tier sizes:
New-Volume -StoragePoolFriendlyName "<poolname>" -FriendlyName Volume1 -FileSystem ReFS -StorageTierFriendlyName SSD_Tier, HDD_Tier -StorageTierSizes $ssd_maxsize, $hdd_maxsize -AllocationUnitSize 64KB -DriveLetter E

More information on interleave sizes:
http://www.dell.com/support/manuals/de/ ... lang=en-us
https://social.technet.microsoft.com/wi ... mance.aspx
tranquilnet
Service Provider
Posts: 25
Liked: 1 time
Joined: Mar 23, 2017 11:10 pm
Full Name: Tranquilnet IT Solutions
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by tranquilnet »

Thank you all for creating this post. I reviewed all responses and referenced articles.

Assuming a 36 x 3.5 inch drive bay + 2 x 2.5 inch drive bay server chassis, is it correct that the suggested/supported SINGLE server solution is:

- 2 x CPU
- 64GB Memory
- Windows 2016 Server Standard
- 2 x SSD Hardware RAID1 OS Partition (NTFS)
- 36 x HDD Hardware RAID10 DATA Partition (ReFS)

And the "better" MULTI-SERVER solution is:

- 2 x CPU
- 64GB Memory
- Windows 2016 Server Datacenter
- 6 x SSD storage tier presented to the OS as individual disks (S2D)
- 30 x HDD storage tier presented to the OS as individual disks (S2D)

Lastly, with ReFS, what is the recommended block (cluster) size?

Thanks,

-W
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by dellock6 »

Go for 64k and NEVER EVER use 4k :)
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
nmdange
Veteran
Posts: 528
Liked: 144 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by nmdange »

With 36 bays, I probably have the exact same physical server you are thinking of purchasing :) I use RAID 50, not RAID 10; RAID 10 wastes too much space imo. A good hardware RAID controller will still get you good performance with RAID 50 or RAID 60. I also have a 45-bay JBOD attached to the RAID controller for additional space.

One suggestion: given how cheap RAM is these days, I would go for 128GB or 256GB of RAM. With 2 CPUs you'd want at least 4 DIMMs, and it's more cost-effective to buy 32GB DIMMs. ReFS on 2016 also really likes RAM :)

Also, if you are thinking of doing S2D, go with a minimum of 4 nodes to get dual parity. Supermicro offers a 36-bay chassis with 4 optional U.2 NVMe drives as well, which is nice. Also important with S2D is the use of RDMA-capable 10Gb+ NICs (Mellanox or Chelsio). There are additional settings on both the servers and switches you have to configure (e.g. priority flow control). If you aren't sure about configuring RDMA correctly, then I would stick with a single server with hardware RAID.
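For what it's worth, the server-side DCB/PFC settings for RoCE usually look roughly like this (a sketch only, with a hypothetical adapter name; the switches have to be configured to match, and iWARP NICs such as Chelsio don't strictly need DCB):

Code: Select all

# Tag SMB Direct (TCP 445) traffic with priority 3 and enable PFC only for it
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for the SMB traffic class and apply QoS on the RDMA NICs
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT 3 Port 1"
Set-NetQosDcbxSetting -Willing $false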
aceit
Enthusiast
Posts: 31
Liked: 14 times
Joined: Jun 20, 2017 3:17 pm
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by aceit »

Hello,
I'm thinking... with so many disks, is it worth it, cost-wise, to go with Storage Spaces and handle the problem in the Windows stack, instead of buying a respectable external array with built-in virtualization and just presenting simple LUNs to the backup servers?
You also get the flexibility to use that provisioned space from multiple servers if needed, simple hand-offs in case of faults, and so on. I don't know whether space provisioned via direct-attached disks works out that much cheaper once you consider all the variables (flexibility, the external space "surviving" the lifecycle of the server itself, etc.), especially if you already have a fabric in place.
Just a consideration regarding the trade-offs of the solution...
Anguel
Expert
Posts: 194
Liked: 18 times
Joined: Apr 16, 2015 9:01 am
Location: Germany / Bulgaria
Contact:

Re: Storage spaces or S2D on 2016 for archive backup reposit

Post by Anguel »

For my small business I plan to put 4 SATA drives into a single Dell PowerEdge T30 with Windows Server 2016 and use ReFS on a Storage Spaces (classic) mirror. This is going to be my main Veeam repository.
Now I've read that classic Storage Spaces is slow with ReFS, especially on writes; however, I did not find any numbers on how slow it actually is. Thanks in advance for any hints.