Yes, you can leverage ReFS integration even for Cloud Connect repositories. Otherwise, why would someone in my role be so interested in this topic?
And just so you know, we tested ReFS integration (thanks to Preben) even with encryption enabled, and it works. So you can receive encrypted backups from tenants and still leverage this new capability. I'm really excited, too, about the benefits a service provider can gain using ReFS for Cloud Connect.
Luca Dell'Oca Principal EMEA Cloud Architect @ Veeam Software
For those looking for information on ReFS upgrades:
From Windows Server 2012 R2 (ReFS 1.2) to Windows Server 2016: ReFS is NOT upgraded (neither by an in-place upgrade nor by presenting an existing ReFS 1.2 LUN to a Windows Server 2016 host). Those already using ReFS, or thinking about using ReFS on Windows Server 2012 R2 now for "future-proofing", should know they'll have to do a data migration. You can keep your data on ReFS 1.2 and it's readable by Windows Server 2016, but you won't gain the great capabilities you're after.
An upgrade from Windows Server 2016 TPv5 (ReFS 3.0) to Windows Server 2016 RTM (ReFS 3.0 or 3.x?) is supported by MSFT. But just like Veeam B&R 9.5 Beta, it's all just for testing & evaluation... these are not RTM-ready products.
I'm quite enthusiastic about it, so I hope to move fast come RTM. I have one repository at the ready, with a LUN waiting to be formatted with ReFS v3.
dellock6 wrote:
On the topic of data reduction, whatever type of job you are running, the savings are all going to be there, let me explain:
- any incremental backup taken during the day contains almost entirely unique data, so the chances of it being reduced by deduplication are low. Dedupe will not help here, and neither will ReFS
I'm honestly wondering if deduplication will be needed AT ALL with this new technology, especially when you add to the discussion the "restore" performance. ReFS is not deduped, so there's nothing to re-hydrate during the restore.
Volumes R, P and V only carry incrementals (scale-out repositories), and dedupe "saves" us nearly 170 TB there, so it will be interesting to see how this compares against ReFS.
dellock6 wrote:Yes, you can leverage ReFS integration even for Cloud Connect repositories. Otherwise, why would someone in my role be so interested in this topic?
And just so you know, we tested ReFS integration (thanks to Preben) even with encryption enabled, and it works. So you can receive encrypted backups from tenants and still leverage this new capability. I'm really excited, too, about the benefits a service provider can gain using ReFS for Cloud Connect.
That's awesome! But I can't see how this would work for encrypted data. I'm glad it does, but I don't understand how.
SBarrett847 wrote:That's awesome! But I can't see how this would work for encrypted data. I'm glad it does, but I don't understand how.
It's the same reason synthetic operations work when encryption is enabled. Remember, our encryption is transparent to ourselves: during transform operations the Veeam data mover can correctly identify the required blocks regardless of their encryption status (our encryption is done block by block anyway; Veeam backups are not managed as a single monolithic binary stream) and "post" them into the new full backup file. But instead of actually writing each block to disk, we just tell ReFS to reference the existing block already present on the volume.
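To illustrate the idea: on Windows, ReFS block cloning is exposed through the FSCTL_DUPLICATE_EXTENTS_TO_FILE control code, which copies extent *references* rather than data. Below is a toy Python model (my own sketch, not Veeam's actual code — the `RefsVolume` class and its methods are invented for illustration) showing why this works identically for encrypted blocks: the clone never looks at the bytes, so ciphertext shares extents just as well as plaintext.

```python
# Toy model of ReFS block cloning (hypothetical classes for illustration).
# The key point: cloning copies references to on-disk extents and never
# inspects the data, so encrypted blocks clone exactly like plain ones.

class RefsVolume:
    """Files are lists of references into a shared pool of physical extents."""
    def __init__(self):
        self.extents = []   # physical blocks (opaque bytes)
        self.files = {}     # file name -> list of extent indexes

    def write(self, name, blocks):
        # A normal write allocates a new physical extent per block.
        refs = []
        for b in blocks:
            self.extents.append(b)
            refs.append(len(self.extents) - 1)
        self.files[name] = refs

    def clone_block(self, src, i, dst):
        # Like FSCTL_DUPLICATE_EXTENTS_TO_FILE: share the existing extent.
        # Nothing is allocated and the data is never read or decrypted.
        self.files.setdefault(dst, []).append(self.files[src][i])

vol = RefsVolume()
# Encrypted blocks from a tenant: content is opaque to the filesystem.
ciphertext = [b"\x9f\x02\xa1\x77", b"\x11\xee\x40\x08"]
vol.write("full.vbk", ciphertext)

# Synthetic full: reference every block of the previous full instead of
# rewriting it.
for i in range(len(vol.files["full.vbk"])):
    vol.clone_block("full.vbk", i, "synthetic.vbk")

print(len(vol.extents))  # still 2 - the synthetic full consumed no new space
```

The "spaceless" synthetic full falls out of this: both backup files exist as independent files, but they point at the same two physical extents.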
Luca
Luca Dell'Oca Principal EMEA Cloud Architect @ Veeam Software
Indeed, support for "deduplicating" encrypted backups is really the killer feature of our ReFS integration, and it is something that neither external deduplicating storage nor Windows Server dedupe can do (and they never will be able to). This is really a game changer, especially for Cloud Connect service providers.
I apologize for my noobishness - still trying to get my head wrapped around all of this and figuring out the best way to proceed...
We have a 2-node Windows Server 2016 Hyper-V cluster with shared SAS primary storage, and I've configured our B&R repository on a second shared SAS array. The source is a CSV which is NTFS-formatted (so we can use Windows dedupe), and the repository array is formatted with ReFS v3.
I am configuring on-host jobs so that the primary host can write directly from source to target, but I'm trying to decide whether we should be using per-VM backup files or not... We have a few Exchange servers (different databases), a few file servers (different file stores), a few DCs (different domains), a few Skype for Business servers, and other miscellaneous small VMs.
Also trying to figure out if there is any reason not to use reverse incremental in this scenario...
I am also looking into using 9.5 with ReFS on 2016; however, I am not seeing any of the spaceless full backup advantages that have been highlighted.
I have the same job running in parallel on two servers: one is a 2012 box going to a dedupe repository, the other is a 2016 server with several ReFS volumes. The jobs run every 30 minutes, one on the 2012 box and one on the 2016 box, and each is also set to do daily synthetic fulls. If my understanding is correct, I should see some advantage in keeping these synthetics on ReFS, as the fulls are mostly the same blocks.
I am, however, not seeing any capacity savings at all. I do see improvement in the transform jobs, but not in capacity. Also, the space used on the volumes shows as larger than the size of the backup repository. Freshly formatted, they showed 150 GB of used space. Even accounting for that 150 GB, my space used exceeds the folder size plus the 150 GB. I am struggling to see why I am not getting the savings I would have anticipated.
On the dedupe volume, size on disk is 4.5 GB against 51 GB of data, so quite a saving. I would also have expected to see a reduction in backup size thanks to the synthetics included as part of the job.
Can anybody who has implemented this give me any pointers? As it stands, it looks like a case of sticking with dedupe, since its capacity savings are greater than the benefit in transform time. If I were getting the spaceless full advantage, however, it would be worthwhile.
Have I not configured something I should have? Any help or advice appreciated.
Can someone clarify something: the ReFS benefits have been mentioned in the context of synthetic full backup jobs.
Do we gain the space efficiencies on GFS backup copy jobs? Are the weekly, monthly, and yearly points also considered synthetic fulls, and do they reap the same benefits?