Comprehensive data protection for all workloads
hoFFy
Service Provider
Posts: 183
Liked: 40 times
Joined: Apr 27, 2012 1:10 pm
Full Name: Sebastian Hoffmann
Location: Germany / Lohne
Contact:

Re: VBR 9.5 - REFS

Post by hoFFy »

Mike Resseler wrote:Sebastian,

What do you mean exactly? You installed B&R on a 2012 R2 server and connected a backup repository that is located on a 2016 server to it?
No, I installed B&R on a Server 2016 machine and also formatted a volume with ReFS, so I'm already using it as the backup repository. I'm just waiting to upgrade to 9.5.
I'm aware that I'll eventually have to create a new repository to benefit from the new features, but that's no problem for me.
VMCE 7 / 8 / 9, VCP-DC 5 / 5.5 / 6, MCITP:SA
Blog: machinewithoutbrain.de
andyg
Enthusiast
Posts: 58
Liked: 5 times
Joined: Apr 23, 2014 9:51 am
Full Name: Andy Goldschmidt
Contact:

Re: VBR 9.5 - REFS

Post by andyg » 1 person likes this post

It would be nice if Veeam themselves could provide us with a step-by-step guide for moving from VBR 9.0 to 9.5 and ReFS: a best-practice guide to follow.

Steps should include creating the new ReFS repository, how to seed or copy existing backups, which tick boxes to enable (since the new features don't enable themselves automatically), whether we then need to run a full backup to make use of ReFS, and so on.

Or if someone has been through all this pain, please share so we all benefit.
-= VMCE v9 certified =-
andyg
Enthusiast
Posts: 58
Liked: 5 times
Joined: Apr 23, 2014 9:51 am
Full Name: Andy Goldschmidt
Contact:

Re: VBR 9.5 - REFS

Post by andyg »

ashleyw wrote: So unless you have a 4-node hyper-converged infrastructure running just for backups, the main benefit of using ReFS on top of ZFS is that we get maximum performance and capacity on the spindles, with the benefit of the application-aware de-dupe of Server 2016 that is coming in Veeam 9.5. The end result is that our backup and management layer costs can be kept down to the best bang for the buck.
Wow, are you saying stick with ZFS and run ReFS on top of it? Will that still apply with VBR 9.5, Windows 2016 and Storage Spaces, or will you drop ZFS altogether?
-= VMCE v9 certified =-
Skyview
Service Provider
Posts: 54
Liked: 13 times
Joined: Jan 10, 2012 8:53 pm
Contact:

Re: VBR 9.5 - REFS

Post by Skyview »

Just a quick question: is Storage Spaces required behind the ReFS volume, or can I use a SAN-based LUN and just format it with ReFS?

Does it have to be 2016 ReFS, or will 2012 work?
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: VBR 9.5 - REFS

Post by tsightler »

It has to be a ReFS volume formatted on Windows Server 2016, as older versions don't support the block clone API (you can't even upgrade a ReFS volume created on an older Windows version). Storage Spaces is not required; you can format any volume with ReFS and get the benefit of fast clone technology. However, Storage Spaces (specifically mirrored and parity spaces) provides some additional benefits regarding integrity streams, i.e. it can not just detect, but also automatically heal, corrupt data blocks.
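For anyone who wants to sanity-check their own repository volume, a minimal PowerShell sketch along these lines should do it. The E: drive letter and the 64KB cluster size are just placeholders, and the fsutil refsinfo query assumes a 2016-level build.

Code:

# Format the repository volume as ReFS with 64KB clusters (placeholder drive letter).
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 65536 -Confirm:$false

# Confirm the file system type and free space.
Get-Volume -DriveLetter E | Select-Object DriveLetter, FileSystem, Size, SizeRemaining

# Check the on-disk ReFS version; a volume formatted on Server 2016 should report 3.x.
fsutil fsinfo refsinfo E: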
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw » 1 person likes this post

andyg wrote: Wow, are you saying stick with ZFS and run ReFS on top of it? Will that still apply with VBR 9.5, Windows 2016 and Storage Spaces, or will you drop ZFS altogether?
Yep. I'm saying that if you want the biggest bang for the buck in terms of spindles and I/O throughput on a single hyper-converged node, your best bet is to run OmniOS, present a virtual block device via iSCSI up to a Windows 2016 VM, format that iSCSI block device as ReFS inside the Windows 2016 server, and make it the backup repository for Veeam. You could of course rush out and get 4 nodes to get similar performance, but then you'll be up for more Datacenter licenses, and you'll need more spindles and SSDs as well because of the Storage Spaces architecture. The beauty is that if you run everything virtualised, you can always change the configuration later should you need to, if things improve with Storage Spaces. ZFS is here to stay for us, at least until something else can meet the same reliability/performance and price point.
Also, a very important point for anyone looking after development shops providing this type of service and hoping that their MSDN licenses cover it: unfortunately they don't. MSDN licenses cover development and test machines only; backup and management are line-of-business services, so they need to be commercially licensed. If you are primarily an MSDN shop, make sure you get this right, otherwise you could be burned in a software audit.
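For anyone curious what the Windows side of that layout looks like, here's a rough sketch of attaching the iSCSI device presented by the ZFS box and formatting it as ReFS. The portal address, drive letter and label are made-up placeholders, it assumes a single target, and the OmniOS/ZFS target setup itself is a separate exercise.

Code:

# Attach the iSCSI block device presented by the ZFS/OmniOS box (placeholder portal IP).
New-IscsiTargetPortal -TargetPortalAddress '10.0.0.50'
$target = Get-IscsiTarget   # assumes a single target is presented
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true

# Find the new raw iSCSI disk, bring it online, and format it as ReFS with 64KB clusters.
$disk = Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' -and $_.PartitionStyle -eq 'RAW' }
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -UseMaximumSize -DriveLetter R |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel 'VeeamRepo'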
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: VBR 9.5 - REFS

Post by dellock6 »

I'm not discussing or judging any design choice, but the one thing I can note in this design, Ashley, is that with a single plain volume there is no way to get the self-healing benefit of ReFS integrity streams. You are still relying on the RAID protection offered by the underlying ZFS, though. But honestly, to me integrity streams with self-healing are a huge deal in the new ReFS.
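For reference, integrity streams can be switched on at format time or per file/folder afterwards. A quick sketch with placeholder paths, keeping in mind the caveat Tom made above: without a mirrored or parity space underneath, ReFS can detect bad blocks but not repair them.

Code:

# Enable integrity streams for the whole volume at format time (placeholder drive letter).
Format-Volume -DriveLetter R -FileSystem ReFS -SetIntegrityStreams $true -Confirm:$false

# Or enable/check them on existing folders and files (placeholder path and file name).
Set-FileIntegrity -FileName 'R:\Backups' -Enable $true
Get-FileIntegrity -FileName 'R:\Backups\job1.vbk' | Select-Object FileName, Enabled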
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw »

Thanks Luca. To be honest, ZFS is well proven in terms of avoiding data corruption, in our experience anyway.
But that aside, the vast majority of people here are going to be running a single-node backup server. So, given that someone has 24 or 36 slots on a single node, what are the recommendations to get the best use of the spindles and high IOPS, bearing in mind that (in the Microsoft world) you can't stripe across storage pools within a single OS instance and that dedicated spares are allocated on a per-storage-pool basis? Or are you suggesting we should all move to multi-node hyper-converged clusters just for the backup layer?
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: VBR 9.5 - REFS

Post by dellock6 » 1 person likes this post

No, I'm not suggesting S2D at all, unless Microsoft changes its licensing scheme for it. Datacenter licensing makes the entire idea behind S2D unusable as of today, as attractive as it is from a design point of view. I wrote a blog post on my website, going out tomorrow, on exactly this topic. It's a pity, but it is what it is.

For single nodes, can you explain to me how you see those as limits? Say I have a Cisco C3260 machine (by the way, they already have a configured solution with Windows plus Storage Spaces); I can put many large HDDs and some SSDs in it. With those I create two tiers, choose a simple mirror as the desired protection, and there I go: I have a RAID 10-like design. And because I have the mirror option, I can leverage integrity streams.
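Roughly what that looks like in PowerShell on a standalone 2016 box, as a sketch only: the pool and tier names and the sizes are invented, and your actual disk counts will dictate what's sensible.

Code:

# Pool every eligible disk, define SSD and HDD tiers, and carve a mirrored, tiered space.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'BackupPool' -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $disks

$ssdTier = New-StorageTier -StoragePoolFriendlyName 'BackupPool' -FriendlyName 'SSDTier' -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName 'BackupPool' -FriendlyName 'HDDTier' -MediaType HDD

# Two-way mirror across both tiers (the RAID 10-like layout); tier sizes are placeholders.
New-VirtualDisk -StoragePoolFriendlyName 'BackupPool' -FriendlyName 'RepoDisk' `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 800GB, 20TB `
    -ResiliencySettingName Mirror -WriteCacheSize 10GB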
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
Mike Resseler
Product Manager
Posts: 8044
Liked: 1263 times
Joined: Feb 08, 2013 3:08 pm
Full Name: Mike Resseler
Location: Belgium
Contact:

Re: VBR 9.5 - REFS

Post by Mike Resseler »

While I am not a big fan of the Datacenter licensing, it depends a bit... For example, when you use it in a hyper-converged way, the license doesn't matter any more, as it becomes a Datacenter license per node anyway (unless you deploy a really low number of VMs per node :-))
alex1002
Enthusiast
Posts: 25
Liked: 1 time
Joined: Jan 27, 2015 6:17 pm
Full Name: Alex
Contact:

Re: VBR 9.5 - REFS

Post by alex1002 »

Sorry to jump in. Are you guys running the ReFS file system with dedupe?
Mike Resseler
Product Manager
Posts: 8044
Liked: 1263 times
Joined: Feb 08, 2013 3:08 pm
Full Name: Mike Resseler
Location: Belgium
Contact:

Re: VBR 9.5 - REFS

Post by Mike Resseler »

Alex,

If you are talking about MSFT dedupe... that doesn't exist on ReFS at this point in time (and I know it is the #1 request from everyone for ReFS :-))
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: VBR 9.5 - REFS

Post by dellock6 »

Mike Resseler wrote: While I am not a big fan of the Datacenter licensing, it depends a bit... For example, when you use it in a hyper-converged way, the license doesn't matter any more, as it becomes a Datacenter license per node anyway (unless you deploy a really low number of VMs per node :-))
That's off-topic, as we are specifically talking about storing Veeam backups here, so hyper-convergence is not involved at all. For storing backups, Datacenter licensing makes the design too pricey: a minimum S2D cluster of 4 nodes with 2 sockets and 8 cores each, built only to store backups, would cost almost 200k USD street price for the licenses alone. Too pricey to even think about for now.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: VBR 9.5 - REFS

Post by tsightler » 2 people like this post

dellock6 wrote: A minimum S2D cluster of 4 nodes with 2 sockets and 8 cores each, built only to store backups, would cost almost 200k USD street price for the licenses alone. Too pricey to even think about for now.
How do you get to $200K? As far as I understand, the retail price for Windows 2016 Datacenter is USD $6,155, which covers the first 16 cores across 2 sockets (you need additional core licenses if you have more cores). That's only ~$25k USD street price for 4 nodes. And now you can do 2-node S2D if you really want to keep the price down while still having some resiliency. Anyone with a volume license agreement will likely pay less than $5,000.
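To spell the arithmetic out (list price per node times node count; actual pricing will obviously vary with your agreement):

Code:

# Quick check of the licensing math quoted above ($6,155 covers 2 sockets / 16 cores per node).
$perNodeUsd = 6155
$nodes      = 4
$perNodeUsd * $nodes    # 24,620 USD, i.e. roughly $25k for the 4-node cluster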
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw »

Just from my perspective, there are a number of terms that may confuse people around here (myself included, so please correct me if I'm wrong on any of this).

Storage Spaces Direct: a network scale-out RAID across multiple nodes (minimum 2) that requires 10GbE networking and 2016 Datacenter licensing. It sounds great, but it cannot be economically viable as a backup architecture for most shops.

Storage pools on individual nodes: this is where a group of disks is grouped together to form a single storage pool. SSD cache disks can be added to the pool, and hot spares are allocated on a per-pool basis.

A ReFS file system can sit on storage presented either by Storage Spaces Direct or by storage pools on individual nodes.
Windows dynamic disks can be created across multiple storage pools on individual nodes (to create primitive striping across pools), but this is the worst of all worlds IMHO.

So, back to the original problem with backup hosts.

If you don't use ZFS as the underlying storage like we do, and you are, say, limited to 24 disk bays using 4TB commodity disks, what is the ideal configuration on a single node to deliver the best price per TB?
If you run RAID 10 and still allow for a couple of hot spares, you are down to a usable space of 11 x 4TB (44TB), which doesn't give a particularly good cost per TB (rough numbers sketched below).
If you run a large RAID 5 set (more than 8 spindles), you are setting yourself up for failure, and the RAID set would not perform particularly well anyway.
If you run Windows dynamic disks with ReFS on top, you may get the benefit of integrity streams, but you are creating another abstraction layer that historically hasn't been particularly reliable (from our perspective, anyway).
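To put rough numbers on those options (illustrative only; this ignores formatting overhead and treats dual parity as simple RAID 6-style N-2 math, which is only an approximation for Storage Spaces):

Code:

# Rough usable-capacity comparison for a 24-bay node with 4TB commodity disks.
$bays = 24; $spares = 2; $diskTB = 4
$dataDisks = $bays - $spares                    # 22 disks left carrying data

$mirrorTB     = ($dataDisks / 2) * $diskTB      # two-way mirror / RAID 10: 44 TB usable
$dualParityTB = ($dataDisks - 2) * $diskTB      # dual parity, RAID 6-style approximation: 80 TB usable
"Mirror: $mirrorTB TB usable, dual parity: $dualParityTB TB usable"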

Also, in our case we need the same backup host to function as a VMware host for other management purposes, so the bare metal will run VMware ESXi with everything else running as VMs.
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: VBR 9.5 - REFS

Post by tsightler »

Out of curiosity, why do you rule out traditional, single-node Storage Spaces + dual parity + ReFS? It supports using up to 100GB of SSD/NVMe for write-back cache, which helps significantly to overcome the traditionally poor write performance, and block clone eliminates the other biggest weakness (fairly poor random read/write I/O performance). The latter has definitely been the biggest issue I've seen with Storage Spaces in the field; well, that and the fact that many people don't properly size the write-back cache, which defaults to an almost useless 32MB for non-tiered storage spaces, although it's entirely possible to configure it bigger.
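For what it's worth, sizing the cache is a one-liner at creation time. Something along these lines, where the pool name, disk name and the 100GB figure are placeholders, and the pool needs SSD/NVMe in it to actually host a cache that large:

Code:

# Dual-parity space with an explicitly sized write-back cache (placeholders throughout).
New-VirtualDisk -StoragePoolFriendlyName 'BackupPool' -FriendlyName 'RepoDisk' `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 `
    -UseMaximumSize -WriteCacheSize 100GB

# Verify what you actually got; non-tiered spaces otherwise default to a 32MB cache.
Get-VirtualDisk -FriendlyName 'RepoDisk' | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize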
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw »

I don't rule it out, but for a single node, continuing to use ZFS and formatting ReFS on top of it via a single Windows 2016 Standard VM provides far more benefits and better performance for us, and it is well proven in terms of reliability.
https://en.wikipedia.org/wiki/ZFS

Here is a fascinating insight into ZFS from the original two engineers, Jeff Bonwick and Bill Moore, way back in 2007!
http://queue.acm.org/detail.cfm?id=1317400

Considering the ZFS architecture is nearly 10 years old, it is a tribute to the Sun engineers, and if MC Hammer had a message for Microsoft it would probably be:
"you can't touch this"
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: VBR 9.5 - REFS

Post by tsightler » 1 person likes this post

Sure, I'm not arguing against ZFS; I know a lot about it. Heck, I was using ZFS for Veeam repositories via OpenSolaris back in 2010, and I can certainly see some of the coolness of running ReFS on top of ZFS (I actually still have that setup in my lab!). But realistically, in testing the two (ReFS on ZFS vs ReFS on Storage Spaces) head to head, I've not been able to come up with any real performance advantage for ZFS; in fact, the native ReFS solution won pretty easily in raw write throughput. Admittedly, I didn't test with as many vdevs as you have and used only 12 drives, and I suspect that would make a significant difference, but then I'd also be losing more usable space. It just feels like it's getting harder to justify having the extra ZFS layer in there.
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw »

From a technical perspective, I now have 9.5 (I just used my ProPartner login).
I did an in-place upgrade to 9.5 and everything went flawlessly.
Things are working as expected, except that at the start of this thread it was suggested that if a ReFS file system is detected, the "align backup file data blocks" option (under the advanced settings for the repository) would be greyed out. In our situation, the "align backup file data blocks" option is neither greyed out nor selected.

How do I know whether ReFS is being used properly or not?

thanks.
adapterer
Expert
Posts: 227
Liked: 46 times
Joined: Oct 12, 2015 11:24 pm
Contact:

Re: VBR 9.5 - REFS

Post by adapterer »

Force a synthetic full and watch space consumption?

To do this, I just adjust the job settings to create a synthetic full daily and move the server calendar forward one day each time I run the job.
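A rough way to see the effect, assuming a placeholder drive letter and path for your repository: note the volume free space before and after the synthetic full, then compare it with the logical size of the new .vbk.

Code:

# Placeholder drive letter/path; run the first line before the synthetic full, the rest after.
$before = (Get-Volume -DriveLetter R).SizeRemaining

# ... trigger the synthetic full in Veeam, wait for it to finish, then:
$after      = (Get-Volume -DriveLetter R).SizeRemaining
$newestFull = Get-ChildItem 'R:\Backups' -Recurse -Filter '*.vbk' |
    Sort-Object LastWriteTime | Select-Object -Last 1

$logicalGB = $newestFull.Length / 1GB
$freedGB   = ($before - $after) / 1GB
"New full reports {0:N0} GB, but free space only dropped {1:N0} GB" -f $logicalGB, $freedGB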
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw »

Hmm, yeah, I might do that, but it will end up corrupting our backup database, so I might need to wait and see (or plan the tests carefully). If you go to the advanced options on the repository, do you see "align backup file data blocks" greyed out? Also, does the Veeam server itself need to be on Windows 2016, or just the backup repository hosting the ReFS volume? Currently our Veeam server is on Windows 2012 R2, with just our repository server set up as a Windows 2016 server at this stage.
adapterer
Expert
Posts: 227
Liked: 46 times
Joined: Oct 12, 2015 11:24 pm
Contact:

Re: VBR 9.5 - REFS

Post by adapterer »

If you are worried, just do a config backup first? But yeah, I wouldn't test on 'prod' backups; maybe use a test job?

Yes, "align backup file data blocks" is greyed out on mine, but I didn't deploy this until after 9.5 was installed.
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw »

Great: I've just added a new repository using the same server as before but a new 60TB drive formatted as ReFS, and now I see the option greyed out as expected. Great stuff.
So in my case the issue seems to be that if you have an existing Windows 2016 server hosting a ReFS file system and you run an in-place upgrade, there doesn't seem to be a way in the UI for it to properly detect the ReFS file system. So I think if I switch the backup jobs to the new repository, I should be good!
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: VBR 9.5 - REFS

Post by tsightler »

ashleyw wrote: Great: I've just added a new repository using the same server as before but a new 60TB drive formatted as ReFS, and now I see the option greyed out as expected. Great stuff.
So in my case the issue seems to be that if you have an existing Windows 2016 server hosting a ReFS file system and you run an in-place upgrade, there doesn't seem to be a way in the UI for it to properly detect the ReFS file system. So I think if I switch the backup jobs to the new repository, I should be good!
Yeah, that's what I was trying to say in this post; sorry if that wasn't completely clear. There's simply no way to upgrade an existing, already created repo to support block clone, even if it was previously created on a properly supported ReFS volume. You can create a new repository on the same volume, move the files there (which is basically instant since it's the same volume), and then remap all of the existing backups to this "new" repo. However, block clone still won't be used on the backup chain until the next full is created, either actively or synthetically.
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: VBR 9.5 - REFS

Post by dellock6 » 1 person likes this post

tsightler wrote:How do you get to $200K? As far as I understand, the retail price for Windows 2016 Datacenter is USD $6155, which covers the first 16 cores across 2 sockets (you need additional core grant licenses if you have more cores). That's only ~$25k USD street price for 4 nodes. And now you can do 2-node S2D if you really want to keep the price down while having some resiliency. Anyone with a volume license agreement will likely pay less than $5000.
My bad, I had read a couple of presentations stating 2 cores per license, so I counted 64 cores over 4 nodes as 32 licenses needed, not 4.
So, to recap and correct my statement for future readers: 16 cores cost USD $6,155 street price, so a 4-node, 2-socket, 8-core cluster can be licensed for about USD $25,000. That's not bad and changes a lot; many commercial backup appliances have a higher price for even a single machine. Sorry for the confusion.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw »

tsightler wrote: Yeah, that's what I was trying to say in this post; sorry if that wasn't completely clear. There's simply no way to upgrade an existing, already created repo to support block clone, even if it was previously created on a properly supported ReFS volume. You can create a new repository on the same volume, move the files there (which is basically instant since it's the same volume), and then remap all of the existing backups to this "new" repo. However, block clone still won't be used on the backup chain until the next full is created, either actively or synthetically.
Great stuff, thanks Tom! I created a new repository pointing to the same Win2016 ReFS server volume, but made the directory e:\backup instead of the previous e:\backups.
Then I went in and checked that the "align backup file data blocks" setting was greyed out on that repository, which it was.
I then moved the old backup directories from e:\backups to e:\backup, went into each job, and changed the repository target to the new one (as well as the replica metadata store and the configuration backup location).
All good.
I've now triggered an active full backup on the job chain and it's grinding away. I'm not seeing any significant load on the ReFS or ZFS layer, so I've increased the maximum concurrent tasks to 6.
ashleyw
Service Provider
Posts: 181
Liked: 30 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: VBR 9.5 - REFS

Post by ashleyw »

Just a quick heads-up: we triggered the first job in the backup chain as an "Active Full", and I noticed that the subsequent chained jobs were still run as incrementals. So, from the UI, you need to select each job separately and choose "Active Full".
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: VBR 9.5 - REFS

Post by dellock6 »

Hi, this is expected. The chaining only controls when a job is started, not how it runs. It's like manually hitting "Run" from the console: if an incremental was planned, that's the type that will run.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
JoshuaPostSAMC
Expert
Posts: 124
Liked: 22 times
Joined: Jul 30, 2015 7:32 pm
Contact:

[MERGED] EMC Isilon connection recommendations

Post by JoshuaPostSAMC »

We have an EMC Isilon with a ton of storage that I'll be able to utilize for Veeam backups among other things.

Today we have it set up as a CIFS repository, with the backup server being a VM connected to it over a 10 GbE network. I just found out that it can also present storage as NFS or iSCSI, so I'm looking to see whether there would be improvements in switching to one of these, although it would still all go over the network.

I've read something about using NFS and a Linux server for the repository, but what gains would there be?
I also saw that Server 2016 with ReFS has some impressive gains, but would those still apply to an iSCSI-attached disk, or would they only really apply to a local disk?

I'm not necessarily dissatisfied with performance today other than Merge/Compact times, but any improvements are always welcome.
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

[MERGED] Re: VBR 9.5 - REFS

Post by PTide »

It should work fine with an iSCSI-attached disk, since it makes no difference to the system whether the disk is locally attached or attached via iSCSI; it is still a block device.

Thanks