Comprehensive data protection for all workloads
Post Reply
nmdange
Veteran
Posts: 528
Liked: 144 times
Joined: Aug 20, 2015 9:30 pm
Contact:

ReFS is getting dedup!

Post by nmdange »

From https://blogs.windows.com/windowsexperi ... TgQXWHW.97
Efficiency
Data Deduplication available for ReFS
New Data Deduplication DataPort API for optimized ingress/egress
Space efficiency with ReFS Compaction
So when is Veeam going to support the new API :lol:
Mike Resseler
Product Manager
Posts: 8191
Liked: 1322 times
Joined: Feb 08, 2013 3:08 pm
Full Name: Mike Resseler
Location: Belgium
Contact:

Re: ReFS is getting dedup!

Post by Mike Resseler »

Lol... When it is ready :-D

Obviously we will look into this and look at the APIs. But since this is a first preview, I guess it gives Veeam some time ;-)
dellock6
VeeaMVP
Posts: 6165
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: ReFS is getting dedup!

Post by dellock6 »

I'd prefer to see the block clone API become stable first, before even more features are added to ReFS. I saw the same with Btrfs in its infancy: they kept trying to add more and more features, and it took a long time before the core code base became stable. I hope MS will not follow the same path...
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
Mike Resseler
Product Manager
Posts: 8191
Liked: 1322 times
Joined: Feb 08, 2013 3:08 pm
Full Name: Mike Resseler
Location: Belgium
Contact:

Re: ReFS is getting dedup!

Post by Mike Resseler »

@Luca,

I have been told (but not confirmed) that a lot of stability fixes have been implemented. Since it is not confirmed, it will be wait and see for now... But this is still a preview, so I hope testing will be done (besides ours) and feedback delivered to MSFT.
nmdange
Veteran
Posts: 528
Liked: 144 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: ReFS is getting dedup!

Post by nmdange »

I'll definitely be running the preview build on some dev Hyper-V and SOFS hosts I have. Having a backup repository on the preview build to test ReFS stability might be a bit harder for me, though maybe I can create a test VM as a repository and add an extra test backup copy job with GFS to try to trigger the ReFS issue. Probably not a good idea to have my primary backups stored on a preview build :)
Mike Resseler
Product Manager
Posts: 8191
Liked: 1322 times
Joined: Feb 08, 2013 3:08 pm
Full Name: Mike Resseler
Location: Belgium
Contact:

Re: ReFS is getting dedup!

Post by Mike Resseler »

I would agree not to store your primary backups on there ;-)
rkovhaev
Veeam Software
Posts: 39
Liked: 21 times
Joined: May 17, 2010 6:49 pm
Full Name: Rustam
Location: hockey night in canada
Contact:

Re: ReFS is getting dedup!

Post by rkovhaev » 1 person likes this post

buckle up for a rough ride?
ITP-Stan
Expert
Posts: 214
Liked: 61 times
Joined: Feb 18, 2013 10:45 am
Full Name: Stan G
Contact:

Re: ReFS is getting dedup!

Post by ITP-Stan »

I really can't see how the block cloning and dedupe features will work nicely together.
SBarrett847
Service Provider
Posts: 315
Liked: 41 times
Joined: Feb 02, 2016 5:02 pm
Full Name: Stephen Barrett
Contact:

Re: ReFS is getting dedup!

Post by SBarrett847 »

Thinking about it, I can't see why it wouldn't; in fact, I can see even greater savings for GFS cloning. Time will tell.

Also listed is ReFS Compaction - I wonder what that is?
mkaec
Veteran
Posts: 465
Liked: 136 times
Joined: Jul 16, 2015 1:31 pm
Full Name: Marc K
Contact:

Re: ReFS is getting dedup!

Post by mkaec » 2 people like this post

It's not mentioned in Microsoft's blog post, but Gostev's community digest email states that dedup on ReFS will be an implementation of the engine currently used with NTFS. I'm saddened to hear that. I was hoping for a dedup engine built around block cloning. That type of engine wouldn't be able to offer compression, but it would eliminate rehydration penalties and the need for garbage collection. Garbage collection was the reason we turned off dedup on our file server: dedup would run, and then the next Veeam incremental would be huge. Our repository could not sustain taking in so many large incrementals.
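To illustrate the distinction being made here, a minimal toy sketch (hypothetical, not the actual ReFS or NTFS engine): in a block-clone-style dedup store, files are just lists of references into a shared pool of unique blocks, so reads need no rehydration step and reference counting replaces a separate garbage-collection pass.

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size, just for the demo


class CloneDedupStore:
    """Toy model of block-clone-style dedup: files are lists of
    references into a shared pool of unique blocks."""

    def __init__(self):
        self.blocks = {}  # hash -> (block bytes, refcount)
        self.files = {}   # name -> list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            if h in self.blocks:
                blk, refs = self.blocks[h]
                self.blocks[h] = (blk, refs + 1)  # duplicate: just add a reference
            else:
                self.blocks[h] = (block, 1)       # unique: store once
            hashes.append(h)
        self.files[name] = hashes

    def read(self, name):
        # No rehydration step: blocks are read directly from the shared pool.
        return b"".join(self.blocks[h][0] for h in self.files[name])

    def delete(self, name):
        # Reference counting stands in for a background garbage-collection pass.
        for h in self.files.pop(name):
            block, refs = self.blocks[h]
            if refs == 1:
                del self.blocks[h]
            else:
                self.blocks[h] = (block, refs - 1)


store = CloneDedupStore()
store.write("a.vbk", b"AAAABBBBCCCC")
store.write("b.vbk", b"AAAABBBBDDDD")  # shares two blocks with a.vbk
print(len(store.blocks))               # 4 unique blocks stored, not 6
store.delete("a.vbk")
print(len(store.blocks))               # CCCC freed immediately; shared blocks kept
```

The point of the sketch: deleting a file frees its unique blocks right away, so there is no deferred garbage-collection run that would later rewrite data and inflate the next incremental.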