-
- Enthusiast
- Posts: 60
- Liked: 10 times
- Joined: Sep 21, 2016 8:31 am
- Full Name: Kristian Leth
- Contact:
VBR 9.5 - REFS
Hello,
We are planning on establishing a new VBR disaster backup solution, and will use VBR 9.5 when it is released (hopefully within days).
The setup will be as follows:
1 all-in-one VBR 9.5 server (proxy, storage repository, etc.) - will be physical
1 iSCSI-based Synology server, providing the storage to the VBR server.
The VBR server will be in the same Layer 2 network as the Hyper-V and VMware servers - so there shouldn't be any firewall / ACL bottlenecks.
I've read that using Veeam with backup repositories based on ReFS should greatly increase backup and restore performance!
veeam-backup-replication-f2/question-re ... ml#p213647
If we want to get the benefits from ReFS on Windows Server 2016, will we then need to format the iSCSI disk as ReFS - or what do we need to do to accomplish this?
Also, another minor question that I'm sure there is an easy answer to, but I haven't been able to find it:
When installing a VBR server in a VMware environment, I believe that Veeam uses the HotAdd function in VMware to speed up the process.
If we install the VBR server as a physical server, how will the VBR server then be able to HotAdd the disk - or is this not relevant?
I hope someone will take the time to respond to these questions, and maybe provide some best practices as well.
Thanks in advance!
-
- Product Manager
- Posts: 6535
- Liked: 762 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: VBR 9.5 - REFS
Hi,
ksl28 wrote: If we want to get the benefits from ReFS on Windows Server 2016, will we then need to format the iSCSI disk as ReFS - or what do we need to do to accomplish this?
Correct. You just mount the storage via iSCSI on the Win2016 server and format it as ReFS. I believe this link is also worth checking.
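For anyone who wants to script that step, a rough sketch along these lines should work (untested here; the portal address, target IQN and drive letter are placeholders, and it assumes the built-in iSCSI and Storage PowerShell modules on Server 2016, driven from Python purely for illustration). The 64K allocation unit size is the one generally recommended for backup repositories:
Code:
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Placeholder values - replace with your Synology portal IP and target IQN.
PORTAL = "192.168.1.50"
TARGET_IQN = "iqn.2000-01.com.synology:example-target"

# Register the portal and connect the target (persists across reboots).
ps(f"New-IscsiTargetPortal -TargetPortalAddress {PORTAL}")
ps(f"Connect-IscsiTarget -NodeAddress {TARGET_IQN} -IsPersistent $true")

# Initialize the new raw disk, partition it, and format it as ReFS with
# a 64K cluster size on drive letter R (placeholder).
ps("Get-Disk | Where-Object PartitionStyle -eq 'RAW' | "
   "Initialize-Disk -PartitionStyle GPT -PassThru | "
   "New-Partition -DriveLetter R -UseMaximumSize | "
   "Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 "
   "-NewFileSystemLabel 'VeeamRepo'")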
ksl28 wrote: If we install the VBR server as a physical server, how will the VBR server then be able to HotAdd the disk - or is this not relevant?
You need to have a VM proxy on the host. Please check the requirements for details.
Thank you.
-
- Enthusiast
- Posts: 60
- Liked: 10 times
- Joined: Sep 21, 2016 8:31 am
- Full Name: Kristian Leth
- Contact:
Re: VBR 9.5 - REFS
Hi,
Thank you so much for clearing this up for me.
I've already checked the first link, but it seemed too simple to be true - I guess it doesn't have to be complicated.
The second link was very interesting; it gave me a lot of new info relevant to this project.
Just to clarify: we don't need a proxy server for backing up Hyper-V machines - we can use our physical VBR server for this - correct?
-
- Product Manager
- Posts: 6535
- Liked: 762 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: VBR 9.5 - REFS
It's a little bit of a different story with proxies in a Hyper-V environment; there are two types - on-host proxy and off-host proxy. By default, the Hyper-V host where your VMs reside is used as a proxy; this is called an "on-host proxy". You can also assign the proxy role to another server; this is called an "off-host proxy". Please check this article for more info.
Thanks
-
- Enthusiast
- Posts: 60
- Liked: 10 times
- Joined: Sep 21, 2016 8:31 am
- Full Name: Kristian Leth
- Contact:
Re: VBR 9.5 - REFS
Hello,
Thanks for the article regarding on-host vs. off-host proxies; it gave me a lot of info.
From what I understood from the article, Veeam does not recommend using a VM to act as an off-host proxy node.
Since we are running a fairly small Hyper-V setup (7 hosts), and only need to back up approx. 5-10% of the VMs with Veeam, I believe the on-host proxy is the best solution - agree?
I can easily understand why Veeam doesn't want a VM to be an off-host proxy, but it's simply way too pricey to buy a new dedicated server to run the off-host proxy on - especially when we are talking about moving approx. 5-10 GB of data each day.
Thanks again for taking your time to provide me with useful information.
-
- Product Manager
- Posts: 6535
- Liked: 762 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: VBR 9.5 - REFS
It's not only the amount of data to be transferred that matters, but also the number of jobs and virtual drives per job. Please check the system requirements in order to get a clear vision of how many resources you might need. The on-host proxy should work fine for 5-10 GB of daily data transfer, as long as you have enough resources to process all the disks in the jobs.
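As a very rough illustration of that sizing math (the per-task figures below are only the commonly quoted rule of thumb of about one CPU core and 2 GB of RAM per concurrently processed virtual disk - treat the official system requirements as the authoritative source):
Code:
# Rough proxy sizing sketch: one "task" = one virtual disk being processed.
# Figures are a commonly quoted rule of thumb, not official numbers.
CORES_PER_TASK = 1
RAM_GB_PER_TASK = 2

def proxy_resources(concurrent_disks: int) -> tuple:
    """Return (cores, GB of RAM) needed for a given number of parallel disks."""
    return concurrent_disks * CORES_PER_TASK, concurrent_disks * RAM_GB_PER_TASK

# e.g. a small job processing 4 disks in parallel:
cores, ram = proxy_resources(4)
print(f"{cores} cores, {ram} GB RAM")  # -> 4 cores, 8 GB RAM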
Thanks.
-
- Service Provider
- Posts: 205
- Liked: 38 times
- Joined: Oct 28, 2010 10:55 pm
- Full Name: Ashley Watson
- Contact:
Re: VBR 9.5 - REFS
Hi, we are busy refreshing our backup infrastructure ready for 9.5.
We are running a single-node hyperconverged unit running VMware with 23 x 3TB drives + a 256GB SSD and 96GB RAM.
We run our main Veeam controller and 4 proxies on there - the proxies pull from separate FC-connected primary storage.
Currently we present storage to Veeam via CIFS from our ZFS (OmniOS) VM.
We have now stood up a Windows 2016 Server VM on the host and used iSCSI (via OmniOS) on a loopback adapter to present a 60TB block device to the Windows 2016 server, which we can then present as a ReFS share. Initial IOMeter tests look very promising.
In the settings of 9.5, is it the backup repository that would need to be changed? (And then obviously the job mappings to the repository.)
Currently we only have options for Microsoft Windows Server, Linux Server, Shared Folder or Deduplicating storage appliance.
Will there be a separate option for ReFS, or are the ReFS connections going to be done in a different way, or detected through the "Microsoft Windows Server" option?
Thanks.
-
- Product Manager
- Posts: 8181
- Liked: 1315 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: VBR 9.5 - REFS
Ashley,
It will be Microsoft Windows Server or (I believe) even a Shared Folder. B&R will detect ReFS automatically and make sure that everything is done as planned.
I haven't seen the latest documentation yet, but if I am not mistaken, B&R will even automatically turn on integrity streams (which is important for detecting corruption, but it also changes the behavior of ReFS in a good way).
Hope it helps
Mike
-
- Veeam Software
- Posts: 1813
- Liked: 653 times
- Joined: Mar 02, 2012 1:40 pm
- Full Name: Timothy Dewin
- Contact:
Re: VBR 9.5 - REFS
You will be able to check the detection of ReFS by going to the advanced settings of the repository. If ReFS is detected, the "Align backup file data blocks" checkbox and text should be grayed out.
-
- Product Manager
- Posts: 6535
- Liked: 762 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: VBR 9.5 - REFS
Hi Ashley,
Did I get it right - you want to share a ZFS LUN from a UNIX VM via iSCSI to a Windows VM running on the same host, format it as ReFS, share it to your Veeam VM (which is another VM) via CIFS, and use it as a repo?
-
- Service Provider
- Posts: 205
- Liked: 38 times
- Joined: Oct 28, 2010 10:55 pm
- Full Name: Ashley Watson
- Contact:
Re: VBR 9.5 - REFS
Well, there are some problems with using Windows Storage Spaces as a backup target...
Windows does not allow striping across multiple storage pools within the same OS instance - so performance is limited to a single storage pool.
Also, the concept of global hot spares for multiple storage pools is missing from Windows Storage Spaces - the spares need to be dedicated to a storage pool.
ZFS allows one or more SSDs to be used as an L2ARC cache on a zpool (i.e. a read cache), and allows striping across multiple vdevs (each of which is in effect a RAID group).
So to overcome these limitations, what we have is the following:
- a single Supermicro chassis with 23 x 3TB SATA disks + 1 x 256GB SSD, a dual-socket motherboard and about 128GB RAM. All drives are on a single LSI 2008 controller; there are 2 rear-facing 128GB SSDs.
- we install VMware onto the mirrored set of the 2 rear-facing SSDs.
- we run a virtual machine for OmniOS with the 23 3TB drives and the SSD as raw device mappings.
- we create a zpool under OmniOS (an OpenSolaris fork now based on the Illumos kernel; we use napp-it for an easy UI) in the following format:
BackupPool
-vdev: raidz1-0, 5 x 3TB SATA
-vdev: raidz1-1, 5 x 3TB SATA
-vdev: raidz1-2, 4 x 3TB SATA
-vdev: raidz1-3, 4 x 3TB SATA
-vdev: raidz1-4, 4 x 3TB SATA
-l2arc cache drive: 256GB SSD
- 1 global spare.
Each vdev is configured as raidz1 (similar to RAID 5).
The performance of the BackupPool is equivalent to 5 vdevs, as writes are striped over the vdevs.
We have sync disabled and a couple of other tweaks, which means writes are cached in RAM for maximum performance.
- we expose a virtual 60TB block device via the COMSTAR iSCSI interface of OmniOS, via a loopback connector, to other VMs on the same VMware host.
- on the same VMware host we run a VM running Windows Server 2016 with the iSCSI initiator, so that we can see a 60TB block device inside the OS and can then format it as a ReFS file system in preparation for Veeam 9.5.
- on the same VMware host we also run the Veeam controller VM and 4 separate Veeam proxies, so we get the parallelism for our workloads for maximum throughput.
- on the same host we also run other management layers like vCenter, the SQL Server DB for vCenter, etc.
- because all the Veeam connectivity to the virtual iSCSI device etc. is via a loopback connection, we are not constrained by the throughput of the NICs.
It may seem like overkill, but this configuration seems to deliver the performance we need at a budget price point, so we are going to see how well it runs as we start to move our backup jobs to hit the ReFS file system. If Microsoft can address the shortfalls of storage pools (i.e. the lack of global spares, striping over pool sets, and an equivalent of the L2ARC and RAM caches), then we'd move from OmniOS in an instant and just switch our storage layer to Windows - but as everything is a VM anyway, it is trivial to switch from one to the other.
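For reference, the raw capacity arithmetic on that pool works out as follows (plain math, ignoring ZFS metadata overhead - the fact that usable space comes out below 60TB suggests the 60TB zvol is presumably sparse/thin-provisioned):
Code:
# Usable capacity of the BackupPool described above: raidz1 loses one
# disk per vdev to parity. Drive size in TB (decimal, as drives are sold).
DRIVE_TB = 3
vdevs = [5, 5, 4, 4, 4]          # disks per raidz1 vdev (22 disks + 1 spare)

data_disks = sum(width - 1 for width in vdevs)   # 4+4+3+3+3 = 17
usable_tb = data_disks * DRIVE_TB                # 51 TB decimal
usable_tib = usable_tb * 1e12 / 2**40            # ~46.4 TiB binary

print(f"{data_disks} data disks -> ~{usable_tb} TB ({usable_tib:.1f} TiB)")
# -> 17 data disks -> ~51 TB (46.4 TiB), so a virtual 60TB zvol on top
#    of this can only be thin-provisioned.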
If there is any chance someone could PM me a link to 9.5 for beta testing (or better, the RTM), it would be great - I'd be able to properly verify this configuration on 9.5.
-
- Influencer
- Posts: 11
- Liked: 3 times
- Joined: Apr 09, 2016 12:12 am
- Full Name: Sanjay kumar
- Contact:
Re: VBR 9.5 - REFS
ksl28 wrote: From what I understood from the article, Veeam does not recommend using a VM to act as an off-host proxy node.
I think, apart from the implementation and the dependency on Microsoft VSS (Volume Shadow Copy Service), implementing an off-host proxy as a VM would defeat one of the important purposes: keeping the Hyper-V host out of the data-transfer path.
ksl28 wrote: I can easily understand why Veeam doesn't want a VM to be an off-host proxy, but it's simply way too pricey.
-
- Service Provider
- Posts: 205
- Liked: 38 times
- Joined: Oct 28, 2010 10:55 pm
- Full Name: Ashley Watson
- Contact:
Re: VBR 9.5 - REFS
One additional question around ReFS: we had to raise the RAM footprint on the 4 Veeam engines we are running to 16GB each, due to weekly roll-ups causing RAM bloat and failures on the transformation jobs.
Will the ReFS target mean we'll be able to drop the RAM footprint, or will the RAM still be used in the same way as before? (I'd expect RAM requirements to be much lower due to the ReFS pointers to the previous incrementals.)
-
- Product Manager
- Posts: 8181
- Liked: 1315 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: VBR 9.5 - REFS
Ashley, from what we can see, there are indeed fewer resources needed. We always talk about the lower I/O required and the shorter time needed to perform the synthetic full due to the block-cloning API, but it will also require less RAM and CPU. Unfortunately, I can't give you a number (like 3x less or something) on those resources. It will be (certainly in the beginning) monitoring and baselining. The technology is rather new (but heavily tested by MSFT, so no worries there...), so we will get better insight and numbers (thanks to you all) in the future.
It is a very good question though; as I said, we focused heavily on I/O and transformation speed, but this is certainly worth knowing too!
Thanks for the question
Mike
-
- Enthusiast
- Posts: 58
- Liked: 5 times
- Joined: Apr 23, 2014 9:51 am
- Full Name: Andy Goldschmidt
- Contact:
Re: VBR 9.5 - REFS
ashleyw wrote: Well, there are some problems with using Windows Storage Spaces as a backup target... Windows does not allow striping across multiple storage pools within the same OS instance - so performance is limited to a single storage pool. Also, the concept of global hot spares for multiple storage pools is missing from Windows Storage Spaces - the spares need to be dedicated to a storage pool. ZFS allows one or more SSDs to be used as an L2ARC cache on a zpool (i.e. a read cache), and allows striping across multiple vdevs (each of which is in effect a RAID group).
Nice post; we have a similar hardware spec but use ZFS on Linux, so I'm curious to see what you think of Win 2016 and ReFS as a replacement for ZFS. (Do you have a blog or any guides you followed to get your ZFS setup?)
-= VMCE v9 certified =-
-
- Enthusiast
- Posts: 60
- Liked: 10 times
- Joined: Sep 21, 2016 8:31 am
- Full Name: Kristian Leth
- Contact:
Re: VBR 9.5 - REFS
Hello,
I have one more question, regarding VBR 9.0 and W2016 with ReFS.
We are planning on deploying Veeam within days, and since VBR 9.5 is just around the corner, we wanted to install VBR 9.0 on a Windows Server 2016 host.
When VBR 9.5 is released, we should be able to just upgrade VBR 9.0 to 9.5, and therefore won't have to reinstall the physical host.
When installing VBR 9.0 on W2016, we are planning to format the iSCSI LUNs as ReFS from the beginning - but is this supported in VBR 9.0?
If it is supported, will VBR 9.5 automatically use the new ReFS features to speed up the process and do consistency checks?
Thanks in advance!
-
- Product Manager
- Posts: 20353
- Liked: 2285 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: VBR 9.5 - REFS
VB&R 9.0 doesn't support Windows Server 2016, meaning it cannot back up VMs running Windows Server 2016 as a guest OS, it cannot back up VMs running on Windows Server 2016 hosts, and it cannot be installed on top of it. Thanks.
-
- Product Manager
- Posts: 6535
- Liked: 762 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: VBR 9.5 - REFS
Ashley,
Thanks for the detailed explanation! Please keep in mind that in order to leverage ReFS capabilities when using a CIFS share repository, you'll need to assign the roles of mount host and gateway server to a Windows 2016 machine (not necessarily the same one that hosts the share).
Thanks
-
- Service Provider
- Posts: 205
- Liked: 38 times
- Joined: Oct 28, 2010 10:55 pm
- Full Name: Ashley Watson
- Contact:
Re: VBR 9.5 - REFS
andyg wrote: Nice post; we have a similar hardware spec but use ZFS on Linux, so I'm curious to see what you think of Win 2016 and ReFS as a replacement for ZFS. (Do you have a blog or any guides you followed to get your ZFS setup?)
The reason we didn't go for ZFS on Linux is that it's not properly supported by napp-it. As we aren't Solaris experts, we chose to manage our ZFS layer largely via the web UI through the napp-it interface.
Our primary storage is coming off an IBM DS Fibre Channel storage unit - we are switching about 70TB of primary storage to a Nimble all-flash array within the next month. At this stage the hyperconverged setups were deemed to be too risky for us - Windows 2016 only went RTM recently (despite us having had some hyperconverged units running Storage Spaces Direct for some time on the technical preview).
For a single-node backup/management host, IMHO Win2016 and ReFS don't cut the mustard, due to the lack of striping across pool sets, restrictions on the use of SSD acceleration, and the lack of global hot spares. So unless you have a 4-node hyperconverged infrastructure running just for backups, the main benefit of using ReFS on top of ZFS is that we get maximum performance and capacity on the spindles, with the benefits of the application-aware de-dupe in Server 2016 that is coming in Veeam 9.5. The end result is that our backup and management layer costs can be kept down to the best bang for the buck.
However, for a multi-node hyperconverged solution for primary storage, I'd run 2016 with Storage Spaces Direct in preference over other solutions right now if we were a Hyper-V shop (but we are a VMware shop currently, and are likely to stay that way for the foreseeable future, particularly with the shift to the core licensing model of Datacenter Edition).
The beauty of using an architecture like we have is that we are currently exposing the old CIFS share to the ReFS VM, copying the old backup files from there and seeding the new ReFS file system, so that we can carry on the backups where they left off and switch over without an issue to our new architecture. When 9.5 comes along, we'll just enable the checkbox that says ReFS file system and we'll start getting the speed-up performance promised, but in the meantime we'll hopefully be getting the de-dupe benefits as the ReFS system is being seeded.
By continuing to use ZFS under the hood, we are also protecting against things like bit rot in a more proven way, as only time will tell as to the reliability of ReFS.
I look after a large dev shop over here, so we have a fair amount of flexibility - our setup really just comes from a 100% virtualised approach (including vCenter and all Veeam components) and lots of experimentation and reading! So we make it all up as we go along.
-
- VP, Product Management
- Posts: 6027
- Liked: 2855 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: VBR 9.5 - REFS
ashleyw wrote: When 9.5 comes along, we'll just enable the checkbox that says ReFS file system and we'll start getting the speed-up performance promised, but in the meantime we'll hopefully be getting the de-dupe benefits as the ReFS system is being seeded.
Just be aware that it doesn't exactly work like that. There's not just a "checkbox that says ReFS", and there's actually no supported method for an in-place upgrade of an existing repository that was created without fast clone support to one that does support fast clone. You have to create a new repository that will be automatically detected as ReFS and also configured with the correct cluster size (4K or 64K). Even if you copy the existing backups into the new repo and import them, then map them to jobs (or just recreate the new repo in the same place as the old one), you still won't get any of the ReFS improvements until new full backups are created, either active or old-style synthetic, so that the new VBK files are properly block-aligned with the filesystem.
Also, there are no de-dupe benefits from ReFS. I guess you can somewhat consider using block clone as de-dupe if you're keeping multiple synthetic fulls or GFS points, but since versions prior to 9.5 didn't support block clone, there are no space-savings benefits with ReFS for backups created prior to 9.5, and even with 9.5 the benefits can only exist if the repo was recognized as ReFS at repository creation time.
Perhaps you were already aware of all of that, but I just wanted to make sure that any others who read this were clear that it's not quite as simple as just a checkbox to migrate existing backups/repositories and get all the goodness; there's a little more work involved.
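To make the fast clone point concrete, here's a toy model (an illustration only - not the real ReFS block clone API or Veeam's file format) of why a cloned synthetic full costs almost no extra space, and why it only works for cluster-aligned source blocks:
Code:
# Toy model of ReFS block cloning: the "filesystem" stores clusters once,
# and files are just lists of references to them. Not the real API.
class ToyReFS:
    def __init__(self):
        self.clusters = {}            # cluster id -> bytes
        self.files = {}               # file name -> [cluster ids]
        self.next_id = 0

    def write(self, name, blocks):
        """Ordinary write: every block consumes a new physical cluster."""
        ids = []
        for block in blocks:
            self.clusters[self.next_id] = block
            ids.append(self.next_id)
            self.next_id += 1
        self.files[name] = ids

    def clone_ranges(self, src, dst, ranges):
        """Synthetic full via cloning: reuse existing clusters, no copy."""
        self.files[dst] = [self.files[src][i] for i in ranges]

fs = ToyReFS()
fs.write("old.vbk", [b"A", b"B", b"C", b"D"])
fs.clone_ranges("old.vbk", "synthetic.vbk", [0, 1, 2, 3])

# Two full backup files, but still only 4 physical clusters on disk:
print(len(fs.clusters))               # -> 4
# A full created *before* the repo was ReFS-aware isn't cluster-aligned,
# so its blocks can't be referenced this way - hence the need for a new
# active or synthetic full after migrating.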
-
- Service Provider
- Posts: 205
- Liked: 38 times
- Joined: Oct 28, 2010 10:55 pm
- Full Name: Ashley Watson
- Contact:
Re: VBR 9.5 - REFS
tsightler wrote: Perhaps you were already aware of all of that, but I just wanted to make sure that any others who read this were clear that it's not quite as simple as just a checkbox to migrate existing backups/repositories and get all the goodness; there's a little more work involved.
Thanks very much for the insightful information, Tom. I was not aware of that, as I haven't seen any release notes going into that level of detail.
It looks like we just need to get our mitts on 9.5 to feel the Veeam love! (Please can someone send it to me.)
It's a pain there isn't a way of importing/migrating existing backups though, as that may present a challenge for most deployments due to lack of spare space (I think we can work around that in our case, though).
-
- VP, Product Management
- Posts: 6027
- Liked: 2855 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: VBR 9.5 - REFS
I think for most people the extra space will be needed anyway, as the only way to get to ReFS 3.1 is to reformat (there's no upgrade from NTFS or previous ReFS versions), so they'll either have to start with a new repo (the easy way), or have some temporary place to copy off all of their backups, reformat, recreate the repo so that it can be recognized as ReFS, copy all the backups back to the new repo, and rescan and remap the existing backups. With the latter option they still won't get the benefits until the next synthetic or active full, because the old backups won't be block-aligned.
Most likely there aren't very many people with backup repos that are already on Windows 2016, especially since it's not a supported platform for running v9.0, so trying to upgrade an existing repo with backups shouldn't really be an issue.
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: VBR 9.5 - REFS
When you say synthetic full, would that just be the next backup if you're doing forever-forward incremental?
-
- VP, Product Management
- Posts: 6027
- Liked: 2855 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: VBR 9.5 - REFS
nmdange wrote: When you say synthetic full, would that just be the next backup if you're doing forever-forward incremental?
No, the merge for forever forward does not recreate the VBK; it only merges blocks from the oldest VIB into the already existing VBK. It needs to be an operation that creates a new VBK so that the blocks are aligned with the ReFS cluster, thus it has to be a synthetic full or active full backup. In theory, I'd guess a maintenance operation of defragment and compact should work as well, since that process also creates a new VBK and then discards the old one, but someone else will have to confirm as I haven't tested that actual scenario.
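A toy contrast of the two operations (again, an illustration only, not Veeam's actual implementation - the point is simply that a merge mutates the existing file, while a synthetic full produces a new, freshly laid-out one):
Code:
# Toy contrast between the two operations (illustration only).

def forever_forward_merge(vbk: list, oldest_vib: dict) -> list:
    """Merge: overwrite blocks inside the *existing* VBK in place.
    The file is never rewritten, so its on-disk layout never changes."""
    for index, block in oldest_vib.items():
        vbk[index] = block
    return vbk                      # same object, same layout

def synthetic_full(vbk: list, vibs: list) -> list:
    """Synthetic full: build a *new* VBK from the chain. On ReFS this new
    file can be written (or fast-cloned) cluster-aligned from day one."""
    new_vbk = list(vbk)
    for vib in vibs:
        for index, block in vib.items():
            new_vbk[index] = block
    return new_vbk                  # brand-new file -> new, aligned layout

full = ["A0", "B0", "C0"]
print(forever_forward_merge(full, {1: "B1"}))   # mutates the existing file
print(synthetic_full(full, [{2: "C1"}]))        # creates a new file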
-
- Service Provider
- Posts: 205
- Liked: 38 times
- Joined: Oct 28, 2010 10:55 pm
- Full Name: Ashley Watson
- Contact:
Re: VBR 9.5 - REFS
I'll gladly help to test if someone would give me the secret handshake on acquiring the 9.5 beta/RTM.
-
- Expert
- Posts: 227
- Liked: 46 times
- Joined: Oct 12, 2015 11:24 pm
- Contact:
Re: VBR 9.5 - REFS
Well, you can colour me impressed... initial testing below. Synthetic full creation is very fast too.
Something that worries me though is long-term retention: given that these are all synthetic backups, it does seem to create somewhat of a house-of-cards scenario, given that some corrupt blocks in the original full would render all fulls corrupt - if I've got this right? What can be done to alleviate this (short of snapshots/replication)? Is that where Storage Spaces comes in?
-
- Product Manager
- Posts: 8181
- Liked: 1315 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: VBR 9.5 - REFS
Nice numbers! Love it.
There is indeed the risk of corruption / bit rot / etc. Storage Spaces Direct (S2D) comes into play here because, with integrity streams enabled, ReFS can detect this, and when you use it on S2D it can also correct it. It can't correct it on a ReFS volume alone though. But the important thing is that we are working with backups, so the answer is the 3-2-1 rule! If this is the first copy of your data, you can use a backup copy job and store the information on a different medium after that (Cloud Connect / tape / a cheap JBOD on a separate site, and so on...).
Mike
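Conceptually, integrity streams work something like this toy model (made-up checksum scheme, not the real ReFS on-disk format): detection is possible on any ReFS volume, but automatic repair needs a redundant copy, which is what mirrored Storage Spaces / S2D provide:
Code:
import zlib
from typing import Optional

# Toy integrity-stream model: store a checksum next to every cluster.
def write_cluster(data: bytes) -> dict:
    return {"data": data, "crc": zlib.crc32(data)}

def read_cluster(cluster: dict, mirror: Optional[dict] = None) -> bytes:
    if zlib.crc32(cluster["data"]) == cluster["crc"]:
        return cluster["data"]                 # checksum ok
    # Corruption detected. Standalone ReFS can only report the error...
    if mirror is None:
        raise IOError("bit rot detected, no redundant copy to repair from")
    # ...but with a mirrored copy (Storage Spaces / S2D) it can self-heal.
    good = read_cluster(mirror)
    cluster["data"], cluster["crc"] = good, zlib.crc32(good)
    return good

primary = write_cluster(b"backup block")
mirror = write_cluster(b"backup block")
primary["data"] = b"backup blo??"              # simulate bit rot
print(read_cluster(primary, mirror))           # repaired from the mirror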
-
- VeeaMVP
- Posts: 6162
- Liked: 1970 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: VBR 9.5 - REFS
Quick correction: you can also use integrity streams on regular Storage Spaces, as long as you use mirror or parity protection.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Service Provider
- Posts: 183
- Liked: 40 times
- Joined: Apr 27, 2012 1:10 pm
- Full Name: Sebastian Hoffmann
- Location: Germany / Lohne
- Contact:
Re: VBR 9.5 - REFS
v.Eremin wrote: VB&R 9.0 doesn't support Windows Server 2016, meaning it cannot back up VMs running Windows Server 2016 as a guest OS, it cannot back up VMs running on Windows Server 2016 hosts, and it cannot be installed on top of it.
That's not correct - you should say it isn't supported.
I already installed a new B&R 9.0 server on Server 2016 with a ReFS volume as the main backup repository.
Now I'm waiting every day for the release of 9.5 to upgrade B&R.
VMCE 7 / 8 / 9, VCP-DC 5 / 5.5 / 6, MCITP:SA
Blog: machinewithoutbrain.de
-
- Product Manager
- Posts: 8181
- Liked: 1315 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: VBR 9.5 - REFS
Sebastian,
What do you mean exactly? You installed B&R on a 2012 R2 server and connected a backup repository that is located on a 2016 server to it?