So when is Veeam going to support the new API?
- Data Deduplication available for ReFS
- New Data Deduplication DataPort API for optimized ingress/egress
- Space efficiency with ReFS Compaction
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
ReFS is getting dedup!
From https://blogs.windows.com/windowsexperi ... TgQXWHW.97
-
- Product Manager
- Posts: 8191
- Liked: 1322 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: ReFS is getting dedup!
Lol... When it is ready
Obviously we will look into this and at the APIs. But since this is a first preview, I guess it gives Veeam some time.
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: ReFS is getting dedup!
I'd prefer to see the block clone API become stable first, before even more features are added to ReFS. I saw the same with BTRFS in its infancy: they kept trying to add more and more, and the code base took a long time to become stable at its core. I hope MS will not follow the same path...
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Product Manager
- Posts: 8191
- Liked: 1322 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: ReFS is getting dedup!
@Luca,
I have been told (but not confirmed) that a lot of stability fixes have been implemented. Since it is not confirmed, it will be wait-and-see first... But this is still a preview, so I hope testing will be done (besides ours) and feedback delivered to MSFT.
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: ReFS is getting dedup!
I'll definitely be running the preview build on some dev Hyper-V and SOFS hosts I have. Having a backup repository on the preview build to test ReFS stability might be a bit harder for me, though maybe I can create a test VM as a repository and add an extra test backup copy job with GFS to try to trigger the ReFS issue. Probably not a good idea to have my primary backups stored on a preview build.
-
- Product Manager
- Posts: 8191
- Liked: 1322 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: ReFS is getting dedup!
I would agree not to store your primary backups there.
-
- Veeam Software
- Posts: 39
- Liked: 21 times
- Joined: May 17, 2010 6:49 pm
- Full Name: Rustam
- Location: hockey night in canada
- Contact:
Re: ReFS is getting dedup!
buckle up for a rough ride?
-
- Expert
- Posts: 214
- Liked: 61 times
- Joined: Feb 18, 2013 10:45 am
- Full Name: Stan G
- Contact:
Re: ReFS is getting dedup!
I really can't see how the block cloning and dedupe features will work nicely together.
-
- Service Provider
- Posts: 315
- Liked: 41 times
- Joined: Feb 02, 2016 5:02 pm
- Full Name: Stephen Barrett
- Contact:
Re: ReFS is getting dedup!
Thinking about it, I can't see why it wouldn't; in fact, I can see even greater savings for GFS cloning. Time will tell.
Also listed is ReFS Compaction - I wonder what that is?
-
- Veteran
- Posts: 465
- Liked: 136 times
- Joined: Jul 16, 2015 1:31 pm
- Full Name: Marc K
- Contact:
Re: ReFS is getting dedup!
It's not mentioned in Microsoft's blog post, but Gostev's community digest email states that dedup on ReFS will be an implementation of the engine currently used with NTFS. I'm saddened to hear that. I was hoping for a dedup engine built around block cloning. That type of engine wouldn't be able to offer compression, but it would eliminate rehydration penalties and the need for garbage collection. Garbage collection was the reason we turned off dedup on our file server: dedup would run, and then the next Veeam incremental would be huge. Our repository could not sustain taking in so many large incrementals.
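To make the distinction concrete, here is a rough conceptual sketch (my own illustration, not ReFS or NTFS dedup internals) of what a block-clone-style engine looks like: every unique block is stored once and reference-counted, reads hit the shared blocks directly (no rehydration step), and reclaiming space is just decrementing refcounts rather than a separate garbage-collection pass that rewrites data and inflates the next changed-block incremental.

```python
# Conceptual sketch of a block-clone-based dedup engine
# (illustration only -- not how ReFS/NTFS dedup is actually implemented).
import hashlib

class BlockCloneDedup:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.store = {}      # hash -> block bytes (one physical copy per unique block)
        self.refcount = {}   # hash -> number of file extents pointing at the block
        self.files = {}      # file name -> ordered list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            h = hashlib.sha256(block).hexdigest()
            if h not in self.store:          # only new unique blocks consume space
                self.store[h] = block
            self.refcount[h] = self.refcount.get(h, 0) + 1
            hashes.append(h)
        self.files[name] = hashes

    def read(self, name):
        # No rehydration: shared blocks are read as-is from the store.
        return b"".join(self.store[h] for h in self.files[name])

    def delete(self, name):
        # Space reclaim is pure refcounting; nothing on disk is rewritten,
        # so changed-block tracking sees no churn from the reclaim itself.
        for h in self.files.pop(name):
            self.refcount[h] -= 1
            if self.refcount[h] == 0:
                del self.store[h]
                del self.refcount[h]

repo = BlockCloneDedup()
repo.write("full.vbk", b"A" * 8192 + b"B" * 4096)
repo.write("copy.vbk", b"A" * 8192)       # duplicate blocks are shared, not copied
print(len(repo.store))                    # 2 unique blocks stored
repo.delete("copy.vbk")                   # reclaim = refcount decrement only
print(repo.read("full.vbk") == b"A" * 8192 + b"B" * 4096)  # True
```

The trade-off Marc mentions is visible in the sketch: blocks must be stored verbatim so they can be hash-matched and shared, which rules out per-file compression, but there is no post-process pass that later rewrites data behind the backup application's back.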