dellock6 wrote:On the topic of data reduction, whatever type of job you are running, the savings are going to be there. Let me explain:
- any incremental backup during a day contains mostly unique data, so the chances of it being reduced by deduplication are low. Dedupe will not help here, and neither will ReFS
I'm honestly wondering if deduplication will be needed AT ALL with this new technology, especially when you add restore performance to the discussion. ReFS is not deduped, so there's nothing to rehydrate during a restore.
PS C:\Windows\system32> Get-DedupStatus

FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
--------- ---------- -------------- ------------- ------
51.79 TB  68.14 TB   357            357           O:
46.54 TB  87.42 TB   6996           6996          R:
52.17 TB  24.88 TB   9              9             S:
49.32 TB  55.01 TB   325            356           Q:
51.47 TB  43.11 TB   4096           4096          P:
42.6 TB   51.9 TB    163            163           T:
57.46 TB  34.06 TB   3326           3326          V:
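For context, the SavedSpace figures above can be turned into an overall savings ratio. A minimal sketch; the 100 TB per-volume capacity is an assumption for illustration, since Get-DedupStatus does not report it:

```python
# Toy calculation of dedup savings from Get-DedupStatus-style figures.
# ASSUMPTION: each volume is 100 TB; real capacity is not shown in the output above.
CAPACITY_TB = 100.0

volumes = {
    "O:": {"free": 51.79, "saved": 68.14},
    "R:": {"free": 46.54, "saved": 87.42},
}

for name, v in volumes.items():
    used = CAPACITY_TB - v["free"]      # physical space consumed on disk
    logical = used + v["saved"]         # what the data would occupy undeduped
    ratio = v["saved"] / logical        # fraction of logical data saved
    print(f"{name} savings {ratio:.0%} ({logical:.2f} TB stored in {used:.2f} TB)")
```

On these assumed capacities, O: and R: come out around 59% and 62% saved, which is why dropping dedupe entirely is not an automatic win for every workload.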
dellock6 wrote:Yes, you can leverage ReFS integration even for Cloud Connect repositories. Otherwise in my role why would I be so interested in this topic?
And just so you know, we tested (thanks to Preben) ReFS integration with encryption enabled as well, and it works. So you can receive encrypted backups from tenants and still leverage this new capability. I'm really excited about the benefits a service provider can gain by using ReFS for Cloud Connect.
Does it also work for Endpoint Jobs and Endpoint Copy Jobs when the target is a B&R repo on ReFS?
SBarrett847 wrote:That's awesome! I'm glad it works for encrypted data, though I can't see how it does.
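The intuition is that block cloning, unlike deduplication, never inspects content: it only records references to extents already on disk, so it makes no difference whether those extents hold plaintext or ciphertext. A toy sketch of that contrast (not Veeam's actual implementation; the XOR "encryption" is a stand-in):

```python
import hashlib
import os

BLOCK = 4096

def encrypt(block: bytes, nonce: bytes) -> bytes:
    # Stand-in for real encryption: XOR with a nonce-derived keystream.
    stream = hashlib.sha256(nonce).digest() * (len(block) // 32 + 1)
    return bytes(a ^ b for a, b in zip(block, stream))

plain = os.urandom(BLOCK)

# Two backups of the SAME plaintext block, encrypted with different nonces:
c1 = encrypt(plain, b"nonce-1")
c2 = encrypt(plain, b"nonce-2")

# Hash-based dedup compares content; the ciphertexts differ, so nothing dedupes.
dedup_saves = hashlib.sha256(c1).digest() == hashlib.sha256(c2).digest()
print("dedup saves space:", dedup_saves)  # False for encrypted data

# Block cloning never looks at content: a synthetic full simply references
# the encrypted extent already on disk, writing no new data at all.
store = [c1]          # extent 0: the encrypted block already in the repository
synthetic_full = [0]  # the "clone" is just a list of extent references
print("extents referenced, bytes written:", len(synthetic_full), 0)
```

So dedup loses its savings on encrypted backups (identical plaintext yields different ciphertext), while an extent-level clone is content-agnostic and keeps working.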
barryCairns wrote:Also, the space used on my volumes shows as larger than the size of the backup repository.