Efficiency
Data Deduplication available for ReFS
New Data Deduplication DataPort API for optimized ingress/egress
Space efficiency with ReFS Compaction
I'd prefer to first see the block clone API become stable before adding even more features to ReFS. I saw the same with BTRFS in its infancy: they kept trying to add more and more, and it took a long time before the core code base became stable. I hope MS will not follow the same path...
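For context, the block clone API in question is exposed to user mode as the FSCTL_DUPLICATE_EXTENTS_TO_FILE control code: the target file is made to share the source file's clusters rather than receiving a copy of the data. A minimal sketch of the call, with placeholder file paths and error handling trimmed; offsets and length must be multiples of the volume cluster size, and the target file must already be large enough:

    #define _WIN32_WINNT 0x0A00
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Placeholder paths; both files must live on the same ReFS volume. */
        HANDLE src = CreateFileW(L"D:\\Backups\\source.vbk", GENERIC_READ,
                                 FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        HANDLE dst = CreateFileW(L"D:\\Backups\\clone.vbk",
                                 GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                 OPEN_EXISTING, 0, NULL);
        if (src == INVALID_HANDLE_VALUE || dst == INVALID_HANDLE_VALUE)
            return 1;

        DUPLICATE_EXTENTS_DATA dup;
        dup.FileHandle = src;                /* file whose clusters get shared */
        dup.SourceFileOffset.QuadPart = 0;   /* cluster-aligned source offset  */
        dup.TargetFileOffset.QuadPart = 0;   /* cluster-aligned target offset  */
        dup.ByteCount.QuadPart = 64 * 1024;  /* cluster-aligned length         */

        DWORD bytes;
        BOOL ok = DeviceIoControl(dst, FSCTL_DUPLICATE_EXTENTS_TO_FILE,
                                  &dup, sizeof(dup), NULL, 0, &bytes, NULL);
        printf("block clone %s\n", ok ? "succeeded" : "failed");

        CloseHandle(src);
        CloseHandle(dst);
        return ok ? 0 : 1;
    }

No data moves in that call; ReFS just adds references to the shared clusters, which is why Veeam's synthetic fulls on ReFS complete so quickly.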
Luca Dell'Oca Principal EMEA Cloud Architect @ Veeam Software
I have been told (but not confirmed) that a lot of stability fixes have been implemented. Obviously, since it is not confirmed, it will be wait-and-see first... But this is still a preview, so I hope testing will be done (by others besides us) and feedback delivered to MSFT.
I'll definitely be running the preview build on some dev Hyper-V and SOFS hosts I have. Having a backup repository with the preview build to test ReFS stability might be a bit harder for me, though maybe I can create a test VM as a repository and add an extra test backup copy job with GFS to try to trigger the ReFS issue. It's probably not a good idea to have my primary backups stored on a preview build.
It's not mentioned in Microsoft's blog post, but Gostev's community digest email states that dedup on ReFS will be an implementation of the engine currently used with NTFS. I'm saddened to hear that. I was hoping for a dedup engine based on block cloning. That type of engine wouldn't be able to offer compression, but it would eliminate rehydration penalties and the need for garbage collection. Garbage collection was the reason we turned off dedup on our file server: dedup would run, and then the next Veeam incremental would be huge. Our repository could not sustain ingesting so many large incrementals.
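To make the distinction concrete, here is a rough sketch of what a block-clone-based dedup could look like. This is not Microsoft's engine or any shipping product, just an illustration with assumed paths and an assumed fixed 64 KB chunk (which must be a multiple of the cluster size): identical chunks at matching offsets in a new file are replaced with clones of an existing file's clusters, so reads land on ordinary extents (no rehydration) and deleting a file only drops cluster reference counts (no garbage collection pass).

    #define _WIN32_WINNT 0x0A00
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>
    #include <string.h>

    #define CHUNK (64 * 1024)  /* assumed chunk size; must be cluster-aligned */

    int main(void)
    {
        /* Placeholder paths; both files must live on the same ReFS volume. */
        HANDLE ref = CreateFileW(L"D:\\Backups\\full.vbk", GENERIC_READ,
                                 FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        HANDLE cur = CreateFileW(L"D:\\Backups\\incr.vbk",
                                 GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                 OPEN_EXISTING, 0, NULL);
        if (ref == INVALID_HANDLE_VALUE || cur == INVALID_HANDLE_VALUE)
            return 1;

        static BYTE a[CHUNK], b[CHUNK];
        LONGLONG off = 0;
        DWORD ra, rb, bytes;

        /* Walk both files in lockstep; where a chunk is unchanged, share the
           reference file's clusters instead of keeping a second copy. */
        for (;;) {
            if (!ReadFile(ref, a, CHUNK, &ra, NULL) ||
                !ReadFile(cur, b, CHUNK, &rb, NULL))
                break;
            if (ra < CHUNK || rb < CHUNK)
                break;                      /* ignore the unaligned tail */
            if (memcmp(a, b, CHUNK) == 0) {
                DUPLICATE_EXTENTS_DATA dup;
                dup.FileHandle = ref;
                dup.SourceFileOffset.QuadPart = off;
                dup.TargetFileOffset.QuadPart = off;
                dup.ByteCount.QuadPart = CHUNK;
                DeviceIoControl(cur, FSCTL_DUPLICATE_EXTENTS_TO_FILE,
                                &dup, sizeof(dup), NULL, 0, &bytes, NULL);
            }
            off += CHUNK;
        }
        CloseHandle(ref);
        CloseHandle(cur);
        return 0;
    }

This toy version only compares chunks at matching offsets, so it is nowhere near a real dedup engine, but it shows why sharing clusters via block clone would avoid the rehydration and garbage collection costs described above.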