Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: ReFS 3.0 and Dedup

Post by Gostev »

matt.burnette wrote: Jun 25, 2019 7:03 pm How is it possible that these do not conflict with one another?
Well, all we know for sure is that Microsoft made specific changes to ReFS in Server 2019 to make block cloning compatible with ReFS deduplication. If you have any concerns about the design or supportability of this functionality by Microsoft, it is best to raise a query directly with them. Without access to the design documentation or source code, it is naturally impossible for anyone here to explain exactly how these technologies are able to work alongside one another.
matt.burnette wrote: Jun 25, 2019 7:03 pm Is there any Veeam guidance on how they are testing it or what the best practices might be to enable this?
There's no guidance from Veeam yet. This is why we called it "experimental support": we put a new feature that depends on a 3rd party technology out into the hands of enthusiasts, specifically to develop best practices based on real-world environments and experiences.
oscaru
Service Provider
Posts: 27
Liked: 11 times
Joined: Jul 26, 2016 6:49 pm
Full Name: Oscar Suarez
Contact:

Re: ReFS 3.0 and Dedup

Post by oscaru »

sege wrote: May 29, 2019 8:00 am As I understand it, the reg key is to be set on the backup server, not the backup repo?
My case is that I have a Win2016 backup server and the repo server is a 2019 with ReFS.

Will that work?
I have the same case, and it doesn't seem to be working: synthetics don't do fast clone, and transformation takes forever.
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: ReFS 3.0 and Dedup

Post by ferrus »

Can anyone at Veeam, or end users who have tried it recently, give an update on this?
I'm considering migrating our Veeam repositories to 2019, and was wondering about the state of deduplication on ReFS, and fast clone.

There appear to be several threads on here, some with positive reports, some negative.
DonZoomik
Service Provider
Posts: 372
Liked: 120 times
Joined: Nov 25, 2016 1:56 pm
Full Name: Mihkel Soomere
Contact:

Re: ReFS 3.0 and Dedup

Post by DonZoomik »

It's the same. I have one server with ReFS deduplication, and it actually seems slower than NTFS due to some ReFS integration specifics (at least when using forever incremental).
karsayor
Novice
Posts: 4
Liked: never
Joined: Jun 06, 2014 10:21 am
Full Name: Philippe Marro
Contact:

Re: ReFS 3.0 and Dedup

Post by karsayor »

I am also very interested in this combination of features. Has any further testing been done on it?
robertpet
Lurker
Posts: 2
Liked: 1 time
Joined: Jun 18, 2019 9:16 am
Contact:

Re: ReFS 3.0 and Dedup

Post by robertpet » 1 person likes this post

Running ReFS + deduplication + fast clone results in really slow fast clone unless you modify your optimization job to run after the synthetic full is created.
jorgedlcruz wrote: Mar 13, 2019 1:50 pm Hi guys,
But that is to be expected, wouldn't you say? To obtain the best possible performance, at a logical level, you need to create your synthetic full on ReFS as usual, perhaps weekly, and then, once your synthetic file has been created, that is the file you want to apply dedupe to.

So, look at your Windows Deduplication schedule and make sure you are not deduping the files of the open chain that will be used to create the synthetic full; if you do, the server will need to rehydrate all those files before the ReFS clone, which is what leads to the long fast-clone times.
We have multiple repositories, most of them with ReFS + deduplication. Synthetic fulls take days to complete. Space savings are great, around 100-150% (Capacity 40TB, Used Space 79.4TB; Capacity 64TB, Used Space 165TB).

The repositories that are not deduplicated run much faster, with space savings around 40%.

So my recommendation is to stick with ReFS without enabling Windows deduplication if you want your synthetic fulls to complete. If you have a schedule where you only run backups on weekdays, you could probably enable deduplication and run the synthetic full on Fridays, and hopefully it will be done by Monday.
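For what it's worth, here is a minimal PowerShell sketch of that scheduling change, assuming the built-in Windows Deduplication cmdlets (the schedule name, window and days are illustrative, not a Veeam recommendation):

# Review the existing dedup schedules first.
Get-DedupSchedule

# Disable continuous background optimization, then run optimization only in a
# fixed window that starts after the synthetic full has typically completed.
Set-DedupSchedule -Name 'BackgroundOptimization' -Enabled $false
New-DedupSchedule -Name 'PostSyntheticOptimization' -Type Optimization `
    -Start 10:00 -DurationHours 8 -Days Sunday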
karsayor
Novice
Posts: 4
Liked: never
Joined: Jun 06, 2014 10:21 am
Full Name: Philippe Marro
Contact:

Re: ReFS 3.0 and Dedup

Post by karsayor »

Thanks, robertpet, for the feedback. Did you try activating the experimental support for block cloning on deduplicated files for Windows Server 2019 ReFS? That is, the ReFSDedupeBlockClone (DWORD) registry value under HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication.
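For anyone searching later, a minimal sketch of setting that value with PowerShell on the backup server, using exactly the path and value name quoted above (verify against current Veeam guidance before relying on it):

# Create/overwrite the DWORD value that enables the experimental support.
$path = 'HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication'
New-ItemProperty -Path $path -Name 'ReFSDedupeBlockClone' -PropertyType DWord -Value 1 -Force

# Confirm the value (a Veeam service restart is typically needed to pick it up).
Get-ItemProperty -Path $path -Name 'ReFSDedupeBlockClone'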
robertpet
Lurker
Posts: 2
Liked: 1 time
Joined: Jun 18, 2019 9:16 am
Contact:

Re: ReFS 3.0 and Dedup

Post by robertpet »

Yes, I did. That is a requirement for fast clone to be enabled on deduplicated ReFS.
dejan.ilic
Enthusiast
Posts: 37
Liked: 1 time
Joined: Apr 11, 2019 11:37 am
Full Name: Dejan Ilic
Contact:

Re: ReFS 3.0 and Dedup

Post by dejan.ilic »

Yes, the ReFSDedupeBlockClone registry value is set on the B&R server but not on the repositories, and we see the "fast clone" message when synthetic fulls are run.
Does the registry value need to be set on all the servers?
HannesK
Product Manager
Posts: 14844
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: ReFS 3.0 and Dedup

Post by HannesK »

To answer your question: please read the 3rd answer at veeam-backup-replication-f2/refs-3-0-an ... ml#p312844

It seems everything works fine, since you can see the "fast clone" message.
mapapo
Lurker
Posts: 1
Liked: never
Joined: Jun 16, 2015 6:34 am
Full Name: Martin Posch
Contact:

Re: ReFS 3.0 and Dedup

Post by mapapo »

Hello,

Any news on this?
We have ReFS with dedupe, and the synthetic fulls take 2 days to complete for a 7 TB backup
(and the message in the logs is fast-clone ;))

Will there be changes with version 10 in regard to this?
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: ReFS 3.0 and Dedup

Post by Gostev »

Not in v10 itself, but we have a plan to try something down the road to see if it improves the situation.

[EDIT] Update from down the road: it did not help.
smannix
Novice
Posts: 4
Liked: 1 time
Joined: Mar 25, 2019 6:00 pm
Full Name: Steve Mannix
Contact:

Re: ReFS 3.0 and Dedup

Post by smannix » 1 person likes this post

mapapo wrote: Jan 14, 2020 7:01 am We have ReFS with dedupe, and the synthetic fulls take 2 days to complete for a 7 TB backup
(and the message in the logs is fast-clone ;))
We got around this by setting the dedupe config to not process files newer than 8 days, and we adjusted the time frame that dedupe can run to a range when the backups weren't running.
This way, the synthetic full creation processes against incrementals that haven't been deduped.
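A minimal sketch of that configuration, assuming the standard Windows Deduplication cmdlets and an example repository volume E: (the 8-day threshold comes from the post above; the window is illustrative):

# Never touch files younger than 8 days, so the active chain stays undeduped.
Set-DedupVolume -Volume 'E:' -MinimumFileAgeDays 8

# Run optimization only while backups are not running (example window).
New-DedupSchedule -Name 'DaytimeOptimization' -Type Optimization `
    -Start 09:00 -DurationHours 6 -Days Monday,Tuesday,Wednesday,Thursday,Friday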
lastangel
Lurker
Posts: 1
Liked: never
Joined: Jan 20, 2016 11:03 am
Contact:

Re: ReFS 3.0 and Dedup

Post by lastangel »

Hi,
I have this configuration:
backup server: Windows Server 2019 (Desktop Experience), Standard, build 17763, with the registry value enabled (and set to 1)
proxy server: Windows Server 2012, Core
repository server: Windows Server 2019, build 18363, Core
ReFS with deduplication on the repository server

When I use synthetic fulls I don't get fast clone. How can I investigate it?
HannesK
Product Manager
Posts: 14844
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: ReFS 3.0 and Dedup

Post by HannesK »

Hello,
and welcome to the forums.

https://helpcenter.veeam.com/docs/backu ... l?ver=95u4

Was ReFS detected by Veeam (meaning the "align backup files..." option is set)?
I suggest checking whether you hit one of the limitations mentioned there.
I assume that you have "new" backups on ReFS (meaning they were not copied from a different repository).

Best regards,
Hannes
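If it helps the investigation, a few read-only checks from the Windows side (run these on the repository server; E: is an assumed drive letter, and if the refsinfo subcommand is unavailable on your build, Get-Volume still shows the file system type):

# Confirm the repository volume is really ReFS and check its version and cluster size.
fsutil fsinfo refsinfo E:

# Confirm Windows Deduplication is enabled on the volume and review its policy.
Get-DedupVolume -Volume 'E:'

# Cross-check the file system type and allocation unit size.
Get-Volume -DriveLetter E | Select-Object FileSystemType, AllocationUnitSize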
sandroalves
Expert
Posts: 131
Liked: 4 times
Joined: Mar 15, 2020 3:56 pm
Full Name: Sandro da Silva Alves
Contact:

Re: ReFS 3.0 and Dedup

Post by sandroalves »

Hi,

I have a forever-incremental backup job with 7 days of retention, stored on a volume (ReFS, 64 KB blocks) with Veeam's native (inline) deduplication.

I created a backup copy job to store only the 1 weekly and 3 monthly (GFS) restore points on another volume (ReFS, 64 KB blocks).

I followed this recommendation for setting up deduplication on the volume where I store the weekly and monthly data, but I still don't see any deduplication results.

https://www.veeam.com/blog/data-dedupli ... veeam.html

I have a total of 900 GB written to that volume.

I ran an optimization job (Start-DedupJob -Type Optimization -Volume E: -Priority High), but I didn't see any deduplication savings.

What am I doing wrong?

- Could it be the block size?
- Could it be the number of days configured in the deduplication settings?
- Could it be that the data stored on the volume cannot be deduplicated?

Thanks.

[Screenshots: the volume's deduplication settings and usage]
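As a first troubleshooting step, these read-only commands show whether the optimization job actually ran and what it saved (standard Windows Deduplication cmdlets, volume E: as in the post above):

# Is an optimization job still running or queued?
Get-DedupJob

# Optimized file counts, in-policy files, and saved space on the volume.
Get-DedupStatus -Volume 'E:'

# Savings rate and the minimum file age policy currently in effect.
Get-DedupVolume -Volume 'E:' |
    Select-Object Volume, SavedSpace, SavingsRate, MinimumFileAgeDays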
kjo@deif.com
Influencer
Posts: 13
Liked: 1 time
Joined: Feb 21, 2019 4:00 pm
Full Name: Kim Johansen
Contact:

Re: ReFS 3.0 and Dedup

Post by kjo@deif.com »

sandroalves, how many days of data do you have? From your screenshots you can see that it will only dedup data that is 3 days old.

Please be aware that there will be a huge performance hit to restore times, as data is fragmented into oblivion when it is deduped, and your synthetic fulls will perform like normal synthetics without block cloning.
kjo@deif.com
Influencer
Posts: 13
Liked: 1 time
Joined: Feb 21, 2019 4:00 pm
Full Name: Kim Johansen
Contact:

Re: ReFS 3.0 and Dedup

Post by kjo@deif.com »

Has anyone here tried setting up local object storage and deduplicating that? It seems to solve our dedup problems:
- Only changes are uploaded (performance)
- Files are split into blocks (max file size)

Restore will of course still be slow because of fragmentation, and it requires a scale-out repository and additional setup.

I have been experimenting with MinIO and I think we might try using that.

gummett
Veteran
Posts: 405
Liked: 106 times
Joined: Jan 30, 2017 9:23 am
Full Name: Ed Gummett
Location: Manchester, United Kingdom
Contact:

Re: ReFS 3.0 and Dedup

Post by gummett »

@sandroalves Bear in mind that you probably don't want to dedupe the 'active' backup chain, as it will impact performance. In other words, it's better to dedupe only the weekly/monthly/yearly GFS points, which you'd do by setting 'deduplicate files older than (in days)' to a value greater than the number of daily retention points on the backup copy job.

Also this is why ReFS block clone and dedupe don't perform well together, per the Program Manager on the Storage and File Systems Team at Microsoft:
Andrew@MSFT wrote: Mar 19, 2020 7:12 pm This is indeed dedup overhead. In short, if you try to clone a deduped file, dedup will inline-rehydrate the file before forwarding the cloning API call. This can be expensive and slow depending on the system...

Also, deduping a cloned file would increase the storage footprint, at least initially, until all the clones have been dehydrated by dedup and their chunks deduplicated.
Ed Gummett (VMCA)
Senior Specialist Solutions Architect, Storage Technologies, AWS
(Senior Systems Engineer, Veeam Software, 2018-2021)
MrSpock
Service Provider
Posts: 49
Liked: 3 times
Joined: Apr 24, 2009 10:16 pm
Contact:

Re: ReFS 3.0 and Dedup

Post by MrSpock »

I have a few questions that I hope someone can bring clarity to.
  • What is the status for ReFS + Deduplication in Veeam Backup 10? Any improvements compared to 9.5 U4?
  • Is the ReFSDedupeBlockClone registry key still needed to be able to use Fast Clone in ReFS repositories on Windows 2019 with deduplication enabled?
My plan is to set up a new ReFS repository and make use of the Fast Clone capability to make GFS creation faster and avoid using unnecessary disk space. I also have a large number of old GFS restore points that I want to keep in the same repository. That will only be possible if those restore points are deduplicated. I understand that I have to configure the optimization job so it does not interfere with the active backup chain, as that would remove the gains from Fast Clone.
  • Will this work as I intend or do I have to use two separate repositories, one with deduplication and one without?
HannesK
Product Manager
Posts: 14844
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: ReFS 3.0 and Dedup

Post by HannesK »

Nothing changed from a Veeam perspective; both features are on the Microsoft side.

The reg key stays, as the combination of block cloning and deduplication rarely makes sense.

For your scenario I would go with two repositories. Everything else is just handicraft work.
MrSpock
Service Provider
Posts: 49
Liked: 3 times
Joined: Apr 24, 2009 10:16 pm
Contact:

Re: ReFS 3.0 and Dedup

Post by MrSpock »

Thank you, Hannes.

I think it does make sense in my case, as "new and block-cloned" GFS restore points will grow in total size over time, while "old and deduplicated" GFS restore points will shrink in total size as they are shifted out. The shifting out will go on for years. Having two repositories would force me to adjust the sizes of these repositories, as I have limited physical disk space on the repository server. Making one repository smaller to make room for the other one to grow is a lot of handicraft work.

Do you see any negative aspects of combining block cloning and deduplication, except for the obvious problem of deduplicating the active chain?
HannesK
Product Manager
Posts: 14844
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: ReFS 3.0 and Dedup

Post by HannesK » 1 person likes this post

I don't like the general idea of applying deduplication to block-cloned data. It just sounds wrong to me. I mean, if I want bad performance, then I can go for an inline deduplication appliance :D

I like "keep it simple, stupid", and if that means it takes some more space... well, that's life. But I know that many people don't agree with my opinion about "handicraft work", so it's up to you to decide :-)
Wedge34
Lurker
Posts: 2
Liked: never
Joined: Jun 15, 2020 11:33 am
Contact:

[MERGED] Windows 2019 REFS with Dedup + VEEAM 10 dedup option

Post by Wedge34 »

Hello,
I need advice about the dedup/compression options in my backup job.

I have 2 x 50 TB local ReFS (64 KB) repositories on a Windows Server 2019 with deduplication activated.
* Align backup file data blocks
* Decompress backup data blocks before storing
* Use per-VM backup files
are all checked.

My job is configured to use incremental backup with periodic synthetic fulls.

In the storage tab I have:
* Enable inline data deduplication: checked
* Compression level: None
* Storage optimization: Local target

I'm not sure about these last 3 options, as it's deduplicated local storage. In the Veeam configuration advice I found only information about network deduplicated storage.

Do you have any advice about this?
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: ReFS 3.0 and Dedup

Post by Gostev » 2 people like this post

The advice is not to use Windows 2019 ReFS in combination with Windows dedupe. These two features do not work well together; please review the feedback above for more info.

Having said that, your planned settings look good in light of using Windows dedupe.

Thanks!
Wedge34
Lurker
Posts: 2
Liked: never
Joined: Jun 15, 2020 11:33 am
Contact:

Re: ReFS 3.0 and Dedup

Post by Wedge34 »

Great, thank you for your answer.
For now, the synthetic full runs on Sunday and ends on Monday, so no performance issues.
I was not sure about the "inline data deduplication" option, as Windows Server does the deduplication on the ReFS volume.
rwgsa
Novice
Posts: 5
Liked: never
Joined: Feb 19, 2020 7:35 pm
Full Name: Raymond Wilson
Contact:

Re: ReFS 3.0 and Dedup

Post by rwgsa »

Gostev wrote: Jun 15, 2020 1:25 pm The advice is not to use Windows 2019 ReFS in combination with Windows dedupe. These two features do not work well together; please review the feedback above for more info.

Having said that, your planned settings look good in light of using Windows dedupe.

Thanks!
Hi,

With all due respect, this is rubbish. What are you telling me here? My Veeam implementation is based on Server 2019 + ReFS + Windows Dedup (sized and specced). Have I been sold a turkey?

I have spent all weekend looking for a simple guide to optimizing Windows 2019 Dedup (not 2012, not 2016) and compatible Veeam settings. All I find is conflicting advice on compression level, inline dedup, local or LAN target (even though it's direct-attached storage), and so on. It is about as clear as mud.

Why can you not put together a simple best practice guide for this and keep it up to date?
DonZoomik
Service Provider
Posts: 372
Liked: 120 times
Joined: Nov 25, 2016 1:56 pm
Full Name: Mihkel Soomere
Contact:

Re: ReFS 3.0 and Dedup

Post by DonZoomik »

Why create a best practice guide for a configuration that is explicitly not a best practice?
This has a pretty good summary: https://www.veeam.com/blog/data-dedupli ... veeam.html I would argue that repository alignment doesn't matter, but that's nitpicking.
The other settings don't matter much. The one hard limit is that you can't have a repo larger than 64 TB (or maybe you can with a hardware VSS provider; I've never tested it). The Windows version doesn't really matter either, as the core dedup engine has not changed much since 2012 (parallelism was added in 2016, ReFS support in 2019). And still, you lose pretty much all the ReFS benefits.
HannesK
Product Manager
Posts: 14844
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: ReFS 3.0 and Dedup

Post by HannesK »

Hi,
My Veeam implementation is based on Server 2019 + ReFS + Windows Dedup (sized and specced). Have I been sold a turkey?
Maybe. I recommend asking the person who sold you that solution. From a sizing perspective, I do not expect it makes a significant difference in disk space usage (I cannot imagine that anyone could size that precisely), because to make Windows dedupe work better (the same goes for other vendors), you need to switch off Veeam dedupe/compression. Alternatively, raising Veeam compression at the cost of CPU might also help you.
Why create a best practice guide for a configuration that is explicitly not a best practice?
Agreed; such a configuration will never end up in a best practice guide. 👍
If we require a reg key to allow something, you can be sure that we don't believe it is a good idea.

Best regards,
Hannes
rwgsa
Novice
Posts: 5
Liked: never
Joined: Feb 19, 2020 7:35 pm
Full Name: Raymond Wilson
Contact:

Re: ReFS 3.0 and Dedup

Post by rwgsa »

Hannes,

So rather than fiddling about with job settings, you have just told me that, for my setup, I would be better off switching off Veeam dedupe and compression entirely. Which contradicts the Veeam article DonZoomik just linked above: https://www.veeam.com/blog/data-dedupli ... veeam.html

You have also mentioned a required registry key which I haven't set. Is this configured automatically in Veeam B&R 10, or do I need to Google yet another article to find out what it is and what it does?

Maybe 'best practice' is the wrong phrase, but you need to create a concise document with all this information in one place about Server 2019 + ReFS + Win Dedup. In that article you should make explicit that this is not a recommended configuration, because you have customers who are using it, some of whom may be getting more frustrated with each passing minute.