Gostev
SVP, Product Management
Posts: 23844
Liked: 3206 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: ReFS 3.0 and Dedup

Post by Gostev » Feb 16, 2019 2:20 am 1 person likes this post

Hi Mark, this was a last-minute development and we did not have a chance to perform effectiveness testing (only reliability testing), which is why the feature is declared experimental. The idea was also to hear effectiveness results from the community, since lab environments don't have real-world data anyway. Thanks!

DonZoomik
Enthusiast
Posts: 94
Liked: 25 times
Joined: Nov 25, 2016 1:56 pm
Contact:

Re: ReFS 3.0 and Dedup

Post by DonZoomik » Feb 19, 2019 7:46 am 1 person likes this post

Case resolved by support.
Slowness has been reproduced in the lab, and it's as slow as with NTFS deduplication.

absentminded
Lurker
Posts: 2
Liked: never
Joined: Mar 13, 2019 12:11 pm
Contact:

Re: ReFS 3.0 and Dedup

Post by absentminded » Mar 13, 2019 12:14 pm

Our fast clone synthetic takes hours. How was your case resolved?

DonZoomik
Enthusiast
Posts: 94
Liked: 25 times
Joined: Nov 25, 2016 1:56 pm
Contact:

Re: ReFS 3.0 and Dedup

Post by DonZoomik » Mar 13, 2019 1:38 pm

It wasn't resolved per se, just replicated in the lab and confirmed to work... badly.
The summary is that all block clone benefits are lost if a file is deduplicated. ReFS deduplication has no real integration with block clone, so metadata-only operations become full read and write operations (data is copied from the dedup store to new blocks). Also, the newly written data is no longer deduplicated.
TL;DR: fast clone is slow with ReFS deduplication.
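
For what it's worth, here is a quick way to see which restore points a dedup job has already optimized (optimized files are turned into reparse points, which is exactly when the fast clone benefit is lost). Just a sketch, and the repository path is a placeholder for your own job folder:

Code:

import stat
from pathlib import Path

# Placeholder repository folder - point this at your own backup job folder.
REPO = Path(r"E:\Backups\Job1")

for f in sorted(REPO.glob("*.vbk")) + sorted(REPO.glob("*.vib")):
    attrs = f.stat().st_file_attributes  # Windows-only stat field
    # Dedup-optimized files are reparse points (other reparse point types
    # exist, but are unlikely inside a backup repository).
    deduped = bool(attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT)
    print(f.name, "deduplicated" if deduped else "not deduplicated")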

jorgedlcruz
Veeam Software
Posts: 214
Liked: 94 times
Joined: Jul 17, 2015 6:54 pm
Full Name: Jorge de la Cruz Mingo
Contact:

Re: ReFS 3.0 and Dedup

Post by jorgedlcruz » Mar 13, 2019 1:50 pm 1 person likes this post

Hi guys,
But I would say that is expected, isn't it? To obtain the best possible performance, logically, you will want to use plain ReFS for your synthetic, which might be weekly or so, and then, once your synthetic file is created, that is the file you want to apply dedupe to.

So, look at your Windows Deduplication schedule and make sure you are not running dedupe on the files of the open chain which will be used to create the synthetic. If you do, the server will have to process all those deduplicated files before it can do the ReFS clone (reading the data back out of the dedup store instead of just cloning blocks, as described above), which is what ends up causing such a long fast clone time.
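
As a rough illustration of that: one knob is MinimumFileAgeDays on the repository volume, so dedup only touches files older than the open chain. A minimal sketch, assuming Windows Server with the Data Deduplication feature installed and an elevated session; "E:" and the 8-day value are just placeholders for your repository volume and your chain length:

Code:

import subprocess

REPO_VOLUME = "E:"       # placeholder for the backup repository volume
MIN_FILE_AGE_DAYS = 8    # example: longer than the weekly synthetic cycle

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Only optimize files older than MIN_FILE_AGE_DAYS, so the open chain used
# for the synthetic full is left alone until it is closed.
run_ps(f"Set-DedupVolume -Volume {REPO_VOLUME} -MinimumFileAgeDays {MIN_FILE_AGE_DAYS}")

# Check that the setting took effect.
print(run_ps(f"Get-DedupVolume -Volume {REPO_VOLUME} | Format-List Volume,MinimumFileAgeDays"))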

I see this ReFS 3.0 plus dedupe combination as kind of what ExaGrid does: you have a landing zone for your weekly, where you run your synthetic operations, and afterwards you apply your deduplication to the chains that are already closed. Of course, all of this is DIY, versus ExaGrid, which gives you all of this and more out of the box.

absentminded
Lurker
Posts: 2
Liked: never
Joined: Mar 13, 2019 12:11 pm
Contact:

Re: ReFS 3.0 and Dedup

Post by absentminded » Mar 13, 2019 2:23 pm

I can confirm we do dedup files older than one day. We'll have to try eight days.
