-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Windows 2019, large REFS and deletes
Hello,
We have been using ReFS for about 2 months now on a new 600 TB backend storage. 3 weeks ago we upgraded to a new, much faster server with Windows Server 2019. Now that the filesystem is 50 % full, one old ReFS issue has resurfaced: when backups are deleted, ReFS writes from other backups hang, then transfer data, hang again, and so on.
It looks like this:
https://imgur.com/a/VzK8BV4
That causes long-running backups and big snapshots.
Any idea which REFS reg setting could help with file deletes in W2019?
Markus
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Windows 2019, large REFS and deletes
Hello, Markus
Actually, no registry settings are required with the final ReFS fix from September 2018.
I have not heard of similar reports yet, but here are some initial thoughts:
1. Did you do an in-place upgrade of your existing repository to Windows Server 2019, or clean install?
2. If it was an in-place upgrade, did you re-format the volume? Windows Server 2019 has an updated ReFS version, which may be causing the issue.
3. If you did format the volume, did you remember to use 64KB cluster size?
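For reference, a quick way to verify this on an existing repository volume (just a sketch - E: is an example drive letter):
Code: Select all
# check the cluster size of an existing ReFS volume - look for "Bytes Per Cluster"
fsutil fsinfo refsinfo E:

# re-formatting with 64KB clusters, if needed (PowerShell - this destroys all data on the volume!)
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 65536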
You're certainly an early adopter of ReFS on Windows Server 2019, so potentially the issue can be specific to 2019. However, I think this is less likely, because the "ultimate" ReFS fix was backported to Server 2016 from Server 2019 code in the first place.
What is more concerning to me is your ReFS volume size. At 50% capacity, it now has close to 1 billion Veeam blocks in ReFS, and you may be running into the next ReFS bottleneck. Honestly, I'd be pleasantly surprised to learn that Microsoft tested block cloning on fully stuffed 600TB volumes.
Would you consider downgrading to Windows Server 2016 as the first troubleshooting step, if everything worked well there?
Thanks!
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
Hello Gostev,
we also thought that after our long struggle with ReFS we finally had a safe version. We also have a W2016 remote site for backup copy jobs where, according to the event log, there has been only one occurrence of slow IO - and that is on much inferior storage hardware. But those repos are only 100 GB, so I guess for most customers ReFS works perfectly.
1. Completely fresh install, to prevent any issues
3. Sure! After all we have been through: "Bytes Per Cluster : 65536"
To be honest: this is our primary Veeam server and we still have other cases open with Veeam support which need to be fixed. The first thing we did was reduce retention and do only one synthetic per week. I hope this helps and we don't have to downgrade...
BTW, the reg settings should still work, shouldn't they? I mean, we just need a more relaxed garbage collection...
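Just so we are talking about the same settings - these are the ReFS tuning values from the old Server 2016 KB that I have in mind (names and values from memory, so please double-check against the KB before setting anything):
Code: Select all
# Server 2016-era ReFS tuning values (DWORDs under the FileSystem key); a reboot is required afterwards
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v RefsEnableLargeWorkingSetTrim /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v RefsNumberOfChunksToTrim /t REG_DWORD /d 32 /f   # increase gradually from the default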
Markus
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
Hello Gostev,
one more thing: I have the feeling that not only the size of the volume is problematic, but also the number of deleted files. It was only visible for us with jobs with 100-200 VMs, where about 700 restore points must be deleted every week. That leads me to an old feature request from the early ReFS days:
Is it possible to somehow "delay" the deletes (max 1 delete per minute or so)? That way ReFS gets time to "breathe". A rough sketch of what I mean is below.
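Something like this simple PowerShell loop is what I imagine - purely an illustration (path and retention are made up, and Veeam would of course have to do this internally so its database stays consistent):
Code: Select all
# illustration only: delete old backup files one at a time, with a pause in between,
# so the ReFS metadata updates are spread out instead of hitting all at once
$old = Get-ChildItem 'E:\Backups' -Recurse -Include *.vbk, *.vib |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) }
foreach ($file in $old) {
    Remove-Item -LiteralPath $file.FullName
    Start-Sleep -Seconds 60   # give ReFS time to "breathe" between deletes
}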
Markus
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Windows 2019, large REFS and deletes
Actually, after looking at the big data, I'm starting to suspect your new hardware (or Server 2019) is the reason...
Apparently, things moved really fast in the 6 months since the "ultimate" ReFS fix shipped, and ReFS adoption really skyrocketed, both in terms of the number and the capacity of backup repositories. At this time, 7.5% of all ReFS repositories used by Veeam customers are over 400TB in size, which in absolute numbers means a few thousand repositories like your own. Thus, I'd expect any issues due to large repository size to have been reported by other users before now - instead of 6 months of complete silence.
We had code with delays between deletions during retention processing in the early days of ReFS troubleshooting - but it made zero difference at the time, because the ultimate issue was in the system memory manager logic, which was tuned for NTFS. Nevertheless, I will check whether it was custom code, or if it survived in the current branch behind a registry key.
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
What's safe to say, now that I have looked at the deletes from the backups that just ran: it only seems to occur if lots of restore points are getting deleted.
A job where only ~20 points were deleted showed no impact on other jobs whatsoever.
I wonder - would disabling per-VM backup chains help?
I also thought about these two things (server and Windows). I really do not know how to rule out the server hardware. We had some performance issues for which we got a private fix today, but that was for some specific SQL queries; normal usage shows excellent performance from things like Veeam agents. Only deletes are impacted - block clones and active fulls are very fast (in the past, active fulls were always an issue in themselves). Even tape is slightly faster than with the old server.
This leaves Windows 2019. We have been trying to open a ticket with MS the whole day with no luck so far - but we will keep trying.
I wonder if these other customers all use per-VM chains, have > 2000 VMs and use synthetic fulls.
Markus
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Windows 2019, large REFS and deletes
I'm aware of a pretty significant number of customers that are backing up between 1000-3000 VMs on a single server with ~500TB of storage and ReFS, with synthetic fulls and per-VM chains, but admittedly, every one I'm aware of is still on Windows 2016. After all of the issues in the early days of ReFS, none of them have been willing to move forward with Windows 2019 to this point - at least, none I've spoken to in the last month or so. Of course, I have no idea if Win 2019 is the actual issue, but it wouldn't surprise me if somehow a regression managed to sneak in. I bet if Anton checks his big data, a pretty tiny fraction of those 7.5% are running on Windows 2019 at this point.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Windows 2019, large REFS and deletes
Right, in fact I am not seeing a single Server 2019-based backup repository yet.
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
Argh... We waited to get a new Veeam server until W2019 was available. And it is indeed really strange that we had no issues with ReFS on the old, slow server with less RAM and with W2019.
So right now our only hope is MS, again.
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: Windows 2019, large REFS and deletes
I just built a new Server 2019 backup repository, but haven't started using it yet. Will definitely report on my experiences.
Markus, just wondering, are you doing weekly synthetic fulls? Perhaps doing forever forward incremental would be better, so that retention point deletions happen every day instead of once a week. That way less is deleted at once. All of my backups either run forever forward incremental or, when using a weekly synthetic full, have the option to convert previous chains into rollbacks enabled. Either way, one retention point is deleted daily instead of 7 at once.
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
Yes, weekly synthetics. Forever incremental is not really an option as we need to be able to restore anytime, even when the backup is running.
Rollbacks lead to extensive fragmentation with ReFS and were thus not recommended, correct? Or do I remember wrong? Also, the restore points are locked during that procedure, correct?
-
- Service Provider
- Posts: 30
- Liked: 2 times
- Joined: Sep 15, 2012 8:01 pm
- Full Name: Kelly Michael Knowles
- Contact:
Re: Windows 2019, large REFS and deletes
There may still be some bugs with Windows 2019. I had a Veeam datastore and tape library server running 2019 with ReFS in January that would eventually slow down, with WMI taking up more and more CPU until it was unusable. A reboot would clear it out temporarily, and then it would worsen again. This seemed to be triggered by larger file transfers such as tape copy jobs, but after a couple of weeks of troubleshooting and an additional clean reinstall of Windows Server 2019, I was forced to downgrade to Windows 2016 to get stability back. I tried both Veeam 9.5 U3 and 9.5 U4, but I think the WMI issue was in the OS itself. Luckily it was not my primary backup server, so I was able to keep my jobs and just re-push agents.
Kelly Knowles
Principal Systems Architect at PNJ Technology Partners
Veeam Certified Architect and Veeam Certified Engineer - Advanced: Design & Optimization
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
WMI... Interesting. We also cannot monitor the CPU of our W2019 backup server with PRTG without timeouts appearing...
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: Windows 2019, large REFS and deletes
FYI, it sounds like there may be ReFS fixes for 2019 coming soon. Saw this in a comment on a Microsoft blog:
https://cloudblogs.microsoft.com/window ... nter-2019/
DPM 2019 contains performance improvements making DPM more stable and performant compared to DPM 2016. With DPM 2019 changes we recommend using tiered storage (using SSD), which will result in 70-75% gain in backup speeds compared to DPM 2016. Also, in near future Windows team would be releasing few ReFS fixes on top of WS 2019 further enhancing the DPM performance.
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
It seems like a new refs.sys just came out with KB4489899: version 10.0.17763.379 (the old one was .348)...
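For anyone who wants to check which build they are on, the driver version can be read like this (assuming the default system path):
Code: Select all
# show the file version of the ReFS driver
(Get-Item 'C:\Windows\System32\drivers\ReFS.sys').VersionInfo.FileVersion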
[UPDATE] 10.0.17763.379 does not solve the problems with deletions!
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
A strange thing happened with our system: 2 days ago all writes stalled on our ReFS volume and we only got about 10 MB/s to the repo instead of ~500 MB/s - the same thing we had with early versions of Windows 2016 ReFS. Fast clones took 22 hours instead of 1 hour.
We did several tests with ReFS settings; none helped. But what did seem to help was disabling the Intel processor security mitigations.
We already had the Retpoline patch enabled.
Did anyone else have issues with these patches?
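For completeness, we toggled the mitigations via the two documented override values in the registry (values as per the Microsoft guidance - please verify against the current article before touching a production box; a reboot is required):
Code: Select all
# disable the Spectre/Meltdown mitigations via the documented Session Manager override values
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 3 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f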
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
Sadly, disabling the patch did not help. After some deletes, the fast clone rate went way down.
We got a test tool from Veeam (block-clone-spd). When we test after the jobs have run, we get a fast clone rate of 63 MB/s. After a fresh reboot the rate goes up to 3 GB/s!!
Right now we are thinking about going back to NTFS again...
-
- Expert
- Posts: 160
- Liked: 28 times
- Joined: Sep 29, 2017 8:07 pm
- Contact:
Re: Windows 2019, large REFS and deletes
Curious whether this month's updates changed anything. My backup server is bluescreening a lot and I am thinking I might need to reformat (currently 2016) and might move to 2019, but I really do not like that this feels like 2016 all over again.
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
@Mgamerz I have only one good thing to report: Gostev pulled some strings for us (thank you again if you read this) and we are in direct contact with the ReFS developers. They are extremely interested in our problem and want to fix it ASAP.
Generally, if your backup files and/or the volume are not that big, or you are using a backup method that does not delete much (deletes are our only, but very bad, issue!), you might not even see a problem. The same goes if you do not have many concurrent jobs.
And: we have not had a single bluescreen with Server 2019. But we have plenty of RAM (768 GB) dedicated to the repo server, so that might help. RAM usage often goes to > 300 GB when files are being deleted.
-
- Novice
- Posts: 4
- Liked: never
- Joined: Apr 27, 2016 6:50 am
- Full Name: Gruber Markus
- Contact:
Re: Windows 2019, large REFS and deletes
Looks like I am running into the same problem as mkretzer. Dell R740xd with 2x 512 SSDs for the OS and the Veeam programs, 12x 4 TB in a RAID60 with ReFS for the repository, Server 2019 as the OS. In the first week I did not see any performance issues, as we were only writing backups to disk, but then I tried to copy the backups to tape and the read speed of the files went down to 5 MB/s. Since a simple copy & paste of a 300 GB file shows the same terrible speed, it is not a Veeam problem, so for the moment Dell is also searching for the reason.
If there is any news from Microsoft, please also post it here.
With best regards
Markus
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Windows 2019, large REFS and deletes
If possible, try using Windows Server 1903 as the backup repository. Apparently, it has significant architectural improvements around ReFS metadata handling over vanilla Server 2019.
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
We are in direct contact with Microsoft devs (thanks to Gostev and our not-so-small volumes).
1903 and not-too-big volumes (the 600 TB we have seems to be a little too much) was their recommendation for now. But they are really eager to get this fixed for good (for the first time I really have to praise the Microsoft guys we are talking to!).
We will try to implement this with a new system and new storage. I think it will take a few weeks until we have results which I can post here.
-
- Novice
- Posts: 4
- Liked: never
- Joined: Apr 27, 2016 6:50 am
- Full Name: Gruber Markus
- Contact:
Re: Windows 2019, large REFS and deletes
I'll try to get my hands on the 1903 ISO; as Microsoft has not published it in my VLSC, I need to search for another source.
-
- Service Provider
- Posts: 17
- Liked: 2 times
- Joined: Jan 19, 2015 3:19 pm
- Full Name: Bret Esquivel
- Contact:
Re: Windows 2019, large REFS and deletes
Any updates on this? We're experiencing the same issues with our 2019 ReFS repository.
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
We just received new hardware and will switch to Server 1903 to test whether that helps. I will keep this thread updated, but expect results in 4-6 weeks (after the first deletes).
-
- Service Provider
- Posts: 17
- Liked: 2 times
- Joined: Jan 19, 2015 3:19 pm
- Full Name: Bret Esquivel
- Contact:
Re: Windows 2019, large REFS and deletes
Just to add some details, and our experience. It would be great to hear any next steps or troubleshooting tips.
* System is a Veeam Cloud Connect Repository, ~150TB used out of 250TB
* RAID50, Microsemi (Adaptec) 3154-8i controller
* 64GB RAM, hovers around 50% usage
* Windows Server 2019 1809, Build 17763.615
* ReFS.sys version 10.0.17763.592
* ReFsv1.sys version 10.0.17763.404
* Fully patched with latest CU
* ReFS 64K blocks
* No BSOD experienced ever
After a few days (sometimes hours, or a day), the system essentially grinds to a halt. A peek at clients' VBR jobs shows that, typically, a fast clone merge is happening at the time.
The only thing that sticks out is that the System process is running at around 60% CPU, whereas VeeamAgent.exe and the others are relatively negligible. Running Windows Performance Analyzer to dig into the System process shows the spike all coming from ntoskrnl.exe -> ReFS.SYS:
Code: Select all
Line #, Process, Stack Tag, Stack (Frame Tags), Stack, Thread Activity Tag, Count, Weight (in view) (ms), TimeStamp (s), % Weight
2, System (4), , , , , 485792, 485,806.329000, , 30.80
3, , Other, [Root], , , 312174, 312,180.746400, , 19.79
4, , , |- ntoskrnl.exe!KiStartSystemThread, , , 312169, 312,175.749200, , 19.79
5, , , | ntoskrnl.exe!PspSystemThreadStartup, , , 312169, 312,175.749200, , 19.79
6, , , | |- ntoskrnl.exe!ExpWorkerThread, , , 309442, 309,447.880800, , 19.62
7, , , | | |- ReFS.SYS!MspWorkerRoutine, , , 306853, 306,854.972900, , 19.46
8, , , | | | |- ReFS.SYS!CmsVolumeContainer::PersistContainerCacheWorkerCallback, , , 297124, 297,123.507800, , 18.84
9, , , | | | |- ReFS.SYS!MspCheckpointVolume, , , 8138, 8,138.670900, , 0.52
10, , , | | | |- ReFS.SYS!CmsContainerRangeMap::ParallelWorkItemBucket, , , 1140, 1,140.834500, , 0.07
11, , , | | | |- ReFS.SYS!CmsBPlusTable::TreeUpdateWorkParallelWorkItem, , , 255, 255.580500, , 0.02
12, , , | | | |- ReFS.SYS!MspLazyWriterWorker, , , 83, 83.109700, , 0.01
13, , , | | | |- ReFS.SYS!SortWritePlanWorker, , , 66, 66.218300, , 0.00
14, , , | | | |- ReFS.SYS!CmsBPlusTable::DiscardPagesWorker, , , 29, 29.053100, , 0.00
15, , , | | | |- ReFS.SYS!MspTrimWorkerFn, , , 12, 12.001500, , 0.00
16, , , | | | |- ReFS.SYS!CmsDurableLog::WriteLogWorkRoutine, , , 6, 5.996600, , 0.00
17, , , | | |- ntoskrnl.exe!CcWorkerThread, , , 1990, 1,994.492500, , 0.13
18, , , | | |- ntoskrnl.exe!MiRebalanceZeroFreeLists, , , 229, 227.398300, , 0.01
19, , , | | |- ntoskrnl.exe!PnpDeviceActionWorker, , , 165, 165.066400, , 0.01
20, , , | | |- Ntfs.sys!NtfsCheckUsnTimeOut, , , 161, 161.938200, , 0.01
21, , , | | |- ReFS.SYS!RefsFspDispatch, , , 14, 14.000000, , 0.00
22, , , | | |- ntoskrnl.exe!ExpHpCompactionRoutine, , , 5, 5.000000, , 0.00
23, , , | | |- Ntfs.sys!NtfsCheckpointAllVolumes, , , 4, 4.000000, , 0.00
24, , , | | |- Ntfs.sys!NtfsFspClose, , , 4, 4.000000, , 0.00
25, , , | | |- ntoskrnl.exe!CmpDelayDerefKCBWorker, , , 3, 3.000000, , 0.00
26, , , | | |- ntoskrnl.exe!SepRmCallLsa, , , 3, 3.000000, , 0.00
27, , , | | |- rdbss.sys!RxpProcessWorkItem, , , 3, 3.000000, , 0.00
28, , , | | |- ntoskrnl.exe!KeRemovePriQueue, [Root], , 2, 2.012500, , 0.00
29, , , | | |- Ntfs.sys!NtfsMarkUnusedContextPreTrimWorkItemProcessing, , , 2, 2.000000, , 0.00
30, , , | | |- ntoskrnl.exe!ExpWorkerThread<itself>, [Root], , 1, 1.000000, , 0.00
31, , , | | |- NDIS.SYS!ndisQueuedCheckForHang, , , 1, 1.000000, , 0.00
32, , , | | |- ntoskrnl.exe!PopPolicyWorkerThread, , , 1, 1.000000, , 0.00
33, , , | | |- ntoskrnl.exe!PspReaper, , , 1, 1.000000, , 0.00
34, , , | |- ntoskrnl.exe!KeBalanceSetManager, , , 1594, 1,594.021100, , 0.10
35, , , | |- ntoskrnl.exe!MiZeroPageThread, , , 1025, 1,025.801200, , 0.07
36, , , | |- ntoskrnl.exe!MiMappedPageWriter, , , 64, 64.020800, , 0.00
37, , , | |- iaStorE.sys!?, , , 12, 12.013000, , 0.00
38, , , | |- dxgmms2.sys!VidSchiWorkerThread, , , 10, 10.000000, , 0.00
39, , , | |- ACPI.sys!ACPIWorkerThread, , , 7, 7.000000, , 0.00
40, , , | |- dxgkrnl.sys!BLTQUEUE::BltQueueWorkerThread, , , 7, 7.000000, , 0.00
41, , , | |- BasicRender.sys!WARPKMADAPTER::WarpGPUWorkerThread, , , 4, 4.012300, , 0.00
42, , , | |- HTTP.sys!UlpThreadPoolWorker, , , 1, 1.000000, , 0.00
43, , , | |- ntoskrnl.exe!ExpWorkQueueManagerThread, , , 1, 1.000000, , 0.00
44, , , | |- ntoskrnl.exe!ExpWorkerFactoryManagerThread, [Root], , 1, 1.000000, , 0.00
45, , , | |- ntoskrnl.exe!KeSwapProcessOrStack, , , 1, 1.000000, , 0.00
46, , , |- SmartPqi.sys!?, , , 5, 4.997200, , 0.00
47, , n/a, n/a, n/a, , 173314, 173,321.338400, , 10.99
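For anyone who wants to capture a similar trace, the built-in Windows Performance Recorder can do it roughly like this (check wpr /? on your build - the profile name and output path are assumptions on my part):
Code: Select all
# start a CPU sampling trace with the built-in Windows Performance Recorder
wpr -start CPU -filemode
# ...reproduce the slow merge/delete phase, then stop and save the trace
wpr -stop C:\Temp\refs-cpu.etl
# open the .etl file in Windows Performance Analyzer and drill into the System (4) process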
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
@bretesq We only experienced these issues when there had been deletes on our current system.
We completely disabled deletion of old backups and our system runs without any issues. But we have ~200 disks in the RAID system and quite a large cache.
Was anything being deleted at that time?
What we also experienced in the past was that when our backend storage was too slow, RAM usage crept up and up (ReFS seems to cache quite a lot in RAM), until at some point a limit was reached and suddenly IO stalled while it tried to flush the data to the backend system. This feels completely different from NTFS, as NTFS would not "cache" so many writes.
How high are the write latencies of your backend?
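We watch ours with the standard perfmon counters, roughly like this (E: stands in for your repository volume):
Code: Select all
# sample the write latency (seconds per write) of the repository volume every 5 seconds
Get-Counter -Counter '\LogicalDisk(E:)\Avg. Disk sec/Write' -SampleInterval 5 -Continuous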
-
- Service Provider
- Posts: 13
- Liked: 2 times
- Joined: Oct 25, 2018 11:33 am
- Full Name: Yaroslav
- Contact:
Re: Windows 2019, large REFS and deletes
Hello,
has anyone checked whether the TRIM/unmap parameter affects this issue?
On Server 2016, TRIM was disabled by default for ReFS:
Code: Select all
fsutil behavior query DisableDeleteNotify
NTFS DisableDeleteNotify = 0
ReFS DisableDeleteNotify is not currently set
On Server 2019 it is turned on again:
Code: Select all
fsutil behavior query DisableDeleteNotify
NTFS DisableDeleteNotify = 0 (Disabled)
ReFS DisableDeleteNotify = 0 (Disabled)
Based on https://docs.microsoft.com/en-us/window ... s-overview you can see that TRIM on ReFS is supported only on Storage Spaces.
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Windows 2019, large REFS and deletes
No, I have not - that is a good idea, I think! I have disabled it for now, as we get no benefit from TRIM on our system anyway!
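For anyone who wants to try the same, disabling delete notifications for ReFS is a one-liner (set it back to 0 to re-enable):
Code: Select all
# turn off TRIM/unmap (delete notifications) for ReFS volumes
fsutil behavior set DisableDeleteNotify ReFS 1
# verify the change
fsutil behavior query DisableDeleteNotify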