-
- Service Provider
- Posts: 17
- Liked: 1 time
- Joined: Apr 09, 2014 9:11 am
- Full Name: Allan Kjaer
- Contact:
Server 2019 ReFS and Windows Deduplication
In the What's New document for Veeam B&R 9.5 Update 4 it says:
Added experimental support for block cloning on deduplicated files for Windows Server 2019 ReFS. To enable this functionality,
create ReFSDedupeBlockClone (DWORD) value under HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication
registry key on the backup server.
But what value should the reg (DWORD) have?
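The document doesn't state a value, so as a hedged sketch: Veeam enable-flags conventionally use a DWORD of 1, and the key could be created on the backup server like this (the value 1 is an assumption, not something the What's New text confirms):

```powershell
# Create the experimental ReFS dedupe block clone flag on the backup server.
# NOTE: the DWORD value of 1 is an assumption based on the usual
# "1 = enabled" convention; the What's New document does not specify one.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication' `
    -Name 'ReFSDedupeBlockClone' -PropertyType DWord -Value 1 -Force
```

Run it from an elevated PowerShell prompt on the backup server itself, not on the repository.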
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Jan 31, 2019 5:44 pm
- Full Name: Karl Hague
- Contact:
Re: ReFS 3.0 and Dedup
And is this set on each repo server that has a deduped ReFS volume, or only on the primary backup server?
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: ReFS 3.0 and Dedup
This is only on the backup server.
-
- Service Provider
- Posts: 35
- Liked: 6 times
- Joined: Jan 31, 2018 9:31 am
- Full Name: Julien Rick
- Location: Luxembourg
- Contact:
Re: ReFS 3.0 and Dedup
I'm currently testing ReFS with Dedup on server 2019, I have added the ReFSDedupeBlockClone DWORD value to my backup server (Win server 2016).
My test job is configured to create a daily synthetic full, but it doesn't use the block clone feature.
Any idea why?
-
- Enthusiast
- Posts: 95
- Liked: 31 times
- Joined: Mar 07, 2018 12:57 pm
- Contact:
Re: ReFS 3.0 and Dedup
Probably because it requires Server 2019?
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: ReFS 3.0 and Dedup
Correct, that key only works on Windows Server 2019.
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
On WS2019 I saw it already working on one very short chain. It wasn't particularly fast, but there's a lot of IO load going on.
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
I hit my first compact and it seems that fast clone is not fast on deduplicated ReFS. It seems to just copy blocks to a new file rather than clone them (free space on the volume shrinks about as fast), and the target file is not deduplicated (size on disk equals size).
Currently I'd say that it's no faster (if not slower) than NTFS deduplication-based compact.
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
I hit a bigger compact and it's just as slow as with NTFS deduplication: about 6 hours to compact ~2.5 TB of VBKs.
Is this expected? The ReFS volume was created on WS2019, the chain was restarted with an active full on WS2019 on ReFS (i.e. all data has always been on this ReFS volume), and the job log shows that "[fast clone]" is being utilized.
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: ReFS 3.0 and Dedup
Hello,
if the job log says that "fast clone" is used, then it seems unexpected to me. Could you please open a support case and share the case number here?
Thanks,
Hannes
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
#03408303
-
- Influencer
- Posts: 15
- Liked: 4 times
- Joined: Jan 06, 2016 10:26 am
- Full Name: John P. Forsythe
- Contact:
Re: ReFS 3.0 and Dedup
Hi.
What is the file size limit for ReFS + dedup on Server 2019?
Thank you!
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
Only the first 4 TB of a file is processed by deduplication, but there is no hard file size limit. The 64 TB volume limit places some practical limits, though.
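Given that boundary, it may be worth knowing which backup files are past it. A minimal PowerShell sketch, assuming a hypothetical repository path of E:\Backups:

```powershell
# List backup files larger than 4 TB, i.e. files where only the first 4 TB
# would be optimized by Data Deduplication. 'E:\Backups' is a hypothetical
# repository path; adjust to your environment.
Get-ChildItem -Path 'E:\Backups' -Recurse -Include *.vbk, *.vib |
    Where-Object { $_.Length -gt 4TB } |
    Select-Object Name,
        @{ Name = 'SizeTB'; Expression = { [math]::Round($_.Length / 1TB, 2) } }
```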
-
- Enthusiast
- Posts: 57
- Liked: 8 times
- Joined: Jul 13, 2009 12:50 pm
- Full Name: Mark
- Location: The Netherlands
- Contact:
Re: ReFS 3.0 and Dedup
What is to be expected with ReFS 3.0 (with dedup) on Windows 2019, compared to ReFS 2016/2019 without dedup, in a case where you use per-VM backups and everything is fast clone/synthetic fulls?
I expect offline deduplication is now also able to dedupe the whole volume instead of only the backup chain (fast clone), but does anyone know how effective it really is in this scenario? Veeam: any lab tests you are willing to share?
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: ReFS 3.0 and Dedup
Hi Mark, this was a last-minute development and we did not have any chance to perform effectiveness testing (only reliability testing), which is why the feature is declared experimental. Also, the idea was to hear the effectiveness results from the community, as lab environments don't have real-world data anyway. Thanks!
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
Case resolved by support.
The slowness has been reproduced in the lab, and it's as slow as with NTFS deduplication.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Mar 13, 2019 12:11 pm
- Contact:
Re: ReFS 3.0 and Dedup
Our fast clone synthetic takes hours. How was your case resolved?
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
It wasn't resolved per se, just replicated in the lab and confirmed to work... badly.
The summary is that all block clone benefits are lost if a file is deduplicated. ReFS deduplication has no real integration with block clone, and metadata operations become full read and write operations (copied from the dedup store to new blocks). Also, the written data is no longer deduplicated.
TL;DR: fast clone is slow with ReFS Deduplication.
-
- Veeam Software
- Posts: 1493
- Liked: 655 times
- Joined: Jul 17, 2015 6:54 pm
- Full Name: Jorge de la Cruz
- Contact:
Re: ReFS 3.0 and Dedup
Hi guys,
But I would say that is expected, isn't it? To obtain the best possible performance, logically you need to use plain ReFS for your synthetic full (weekly or so), and only once you know the synthetic file is created, apply dedupe to that file.
So, look at your Windows Deduplication schedule and make sure you are not deduplicating the files of the open chain that will be used to create the synthetic full; if you do, the server will need to rehydrate all those files before the ReFS clone, which ends in that long fast-clone time.
I see this ReFS 3.0 plus dedupe setup as similar to what ExaGrid does: you have a landing zone where you run your synthetic operations, and afterwards you apply deduplication to the chains that are already closed. Of course, all of this is DIY versus ExaGrid, which gives you all of this and more out of the box.
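To check whether the optimization window overlaps the backup window, the built-in Data Deduplication cmdlets can show the configured jobs. A minimal sketch:

```powershell
# Inspect the Data Deduplication job schedule so that optimization jobs
# don't run during the backup window or touch the still-open chain.
Get-DedupSchedule | Select-Object Name, Type, Enabled
```

Comparing this output against the job start times in Veeam is a quick way to spot the overlap Jorge describes.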
Jorge de la Cruz
Senior Product Manager | Veeam ONE @ Veeam Software
@jorgedlcruz
https://www.jorgedelacruz.es / https://jorgedelacruz.uk
vExpert 2014-2024 / InfluxAce / Grafana Champion
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Mar 13, 2019 12:11 pm
- Contact:
Re: ReFS 3.0 and Dedup
I can confirm we dedupe files older than one day. I'll have to try eight days.
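For reference, the minimum file age is a per-volume setting. A sketch of raising it to eight days, assuming a hypothetical repository volume of E::

```powershell
# Raise the minimum file age so the open chain (the files feeding the next
# synthetic full) is not optimized before the merge runs.
# 'E:' is a hypothetical repository volume; adjust to your environment.
Set-DedupVolume -Volume 'E:' -MinimumFileAgeDays 8

# Verify the setting took effect.
Get-DedupVolume -Volume 'E:' | Select-Object Volume, MinimumFileAgeDays
```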
-
- Service Provider
- Posts: 1092
- Liked: 134 times
- Joined: May 14, 2013 8:35 pm
- Full Name: Frank Iversen
- Location: Norway
- Contact:
Re: ReFS 3.0 and Dedup
So... I should stick with Server 2019 and ReFS and not enable deduplication?
-
- Veeam ProPartner
- Posts: 565
- Liked: 103 times
- Joined: Dec 29, 2009 12:48 pm
- Full Name: Marco Novelli
- Location: Asti - Italy
- Contact:
Re: ReFS 3.0 and Dedup
Hi guys, slightly off topic... does Veeam support restoring deduplicated files stored on a source VM's Windows Server 2019 ReFS 3.0 volume?
Do I also need Veeam installed on a box with Windows Server 2019 and the "Data Deduplication" feature installed?
Thanks,
Marco
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
The mount server must have Windows Server 2019 and the Data Deduplication feature installed.
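On a 2019 mount server, the feature can be added with the standard cmdlet; a minimal sketch:

```powershell
# On the mount server: install the Data Deduplication feature so deduplicated
# files on the source volume can be read during file-level restore.
Install-WindowsFeature -Name FS-Data-Deduplication

# Confirm the feature is installed.
Get-WindowsFeature -Name FS-Data-Deduplication
```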
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: ReFS 3.0 and Dedup
While I was running it, merge seemed a bit more stable than with NTFS deduplication, but there were no other benefits. With NTFS, I got occasional transient errors during merge (probably due to file locking by the deduplication engine).
-
- Expert
- Posts: 193
- Liked: 47 times
- Joined: Jan 16, 2018 5:14 pm
- Full Name: Harvey Carel
- Contact:
Re: ReFS 3.0 and Dedup
Hi Veeam,
Does this key work if the Windows 2019 ReFS volume is presented via CIFS? Or does Veeam need to have the volume added as a Windows server?
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: ReFS 3.0 and Dedup
Hello,
yes, it does fast clone via SMB/CIFS on a ReFS dedupe filesystem; I just tested it in my lab.
Anyway, I'm not sure whether it makes any sense. SMB is the slowest way to back up, and the comments above say that ReFS dedupe with fast clone is also slow.
Best regards,
Hannes
-
- Novice
- Posts: 9
- Liked: never
- Joined: Mar 12, 2012 4:32 pm
- Full Name: Johan Segernäs
- Contact:
Re: ReFS 3.0 and Dedup
As I understand it, the registry key is to be set on the backup server, not the backup repository?
My case is that I have a Win2016 backup server, and the repo server is a 2019 box with ReFS.
Will that work?
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: ReFS 3.0 and Dedup
That's what the document says, so if it does not work, please open a support case and post the case number here for reference.
-
- Service Provider
- Posts: 14
- Liked: never
- Joined: Aug 11, 2017 9:09 pm
- Full Name: Matt Burnette
- Contact:
Re: ReFS 3.0 and Dedup
Has anyone had any issues so far?
I thought about enabling it, though it looks like Veeam is the only place that talks about block cloning with data dedup.
I also haven't found anywhere that this is supported by Microsoft.
I'm also unsure of the exact mechanisms that might be affected by having both on at the same time.
Does anyone have more information on how these can or can't be enabled at the same time?
I feel it could be a bad idea to have both at the same time, since both are chunking data and replacing it with metadata/pointers.
How is it possible that these do not conflict with one another?
Is there any Veeam guidance on how they are testing this, or what the best practices might be for enabling it?