-
- Enthusiast
- Posts: 60
- Liked: 11 times
- Joined: Sep 21, 2016 8:31 am
- Full Name: Kristian Leth
- Contact:
WS2022 - ReFS Memory leak
Hi,
With reference to this topic veeam-backup-replication-f2/windows-ser ... 58-60.html, we decided to in-place upgrade our physical repo server from 2019 to 2022.
We used the ISO "SW_DVD9_Win_Server_STD_CORE_2022_2108.6_64Bit_English_DC_STD_MLF_X23-03231.ISO", since the latest one crashes during in-place upgrade (https://docs.microsoft.com/en-us/answer ... ort=oldest).
We then patched the server to the newest CU (2022-08) and have been running for a few days now; everything seems to be working fine.
But it appears the ReFS memory leak issue is back once again...
Our repo server normally uses 12-20GB of memory, but it's constantly using 124GB out of the total 128GB in the server.
Has anyone else seen similar issues? Or should we create a ticket with Veeam / Microsoft to debug this further?
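If anyone wants to watch the trend before the box hits the ceiling, one simple option is to log a couple of standard memory counters for a while; a minimal sketch using typeperf (sample interval and count are arbitrary, adjust to taste):
typeperf "\Memory\Available MBytes" "\Memory\System Cache Resident Bytes" -si 60 -sc 120
The "System Cache Resident Bytes" counter roughly tracks the system cache / metafile portion of RAM, which is where this kind of leak tends to show up.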
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: WS2022 - ReFS Memory leak
Hi Kristian,
It would be best to work with our support team on this; please don't forget to share the support case ID.
Thanks!
-
- Enthusiast
- Posts: 60
- Liked: 11 times
- Joined: Sep 21, 2016 8:31 am
- Full Name: Kristian Leth
- Contact:
Re: WS2022 - ReFS Memory leak
Case opened - 05589605
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Nov 29, 2016 3:57 pm
- Full Name: Michael Wilson
- Contact:
Re: WS2022 - ReFS Memory leak
Hello,
Did you get anywhere on this case? I've got two repos exhibiting the same behavior... RAM starvation renders the server unresponsive shortly after boot. Both servers were recently in-place upgraded from 2019 to 2022. I believe they were previously in-place upgraded from 2016 to 2019, though I'm not sure.
Upgrading the RAM on one of the repos from 64GB to 196GB (50TB volume) has solved the issue for now, but it's hard to be sure.
On the other repo, I tried the various tunable registry keys ( https://docs.microsoft.com/en-us/troubl ... usage-refs and https://docs.microsoft.com/en-us/troubl ... responsive ), but there was no change.
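For anyone who wants to try the same thing: those articles describe DWORD values under HKLM\SYSTEM\CurrentControlSet\Control\FileSystem. As a rough sketch of what that looks like (double-check the value names and recommended data against the linked docs before applying, and reboot afterwards):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v RefsEnableLargeWorkingSetTrim /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v RefsNumberOfChunksToTrim /t REG_DWORD /d 32 /f
As noted above, in my case they made no noticeable difference.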
-
- Product Manager
- Posts: 9848
- Liked: 2607 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: WS2022 - ReFS Memory leak
Hi Michael,
Based on Kristian's case notes, reinstalling the OS on the repository server fixed the problem.
I cannot tell what the problem was, but it looks like it was at the kernel level. Hopefully @ksl28 can provide more information from his discussions with Microsoft.
Either way, I recommend you contact our technical support to make sure the problem in your case is not on our end.
Thanks
Fabian
Product Management Analyst @ Veeam Software
-
- Enthusiast
- Posts: 60
- Liked: 11 times
- Joined: Sep 21, 2016 8:31 am
- Full Name: Kristian Leth
- Contact:
Re: WS2022 - ReFS Memory leak
Hi,
We reached out to Microsoft, since it was clearly a kernel-level process that was leaking memory - RAMMap confirmed this.
But since we did an in-place upgrade, Microsoft treats it as a new system - and Microsoft doesn't provide support for new systems unless they have previously run without any issues.
I explained to them that we upgraded from a supported version (WS2019) on supported hardware (Dell R640) to a supported version (WS2022), and that the upgrade completed without any issues.
I guess it's a "new system" since the OS has changed, but all of the steps towards WS2022 were supported by Microsoft.
But they simply rejected us, with the reason that it was a new system... So we ended up doing a clean installation and used Veeam's amazing support to help us bring it back into the environment.
PS: Guess who is now actively converting all repos to Linux and migrating data from Azure blobs to other S3 vendors, based on Microsoft's lousy reply
-
- Novice
- Posts: 9
- Liked: 1 time
- Joined: Nov 21, 2009 3:53 pm
- Full Name: Jose Garcia
- Contact:
Re: WS2022 - ReFS Memory leak
Hi,
I got the same issue after in-place upgrading a W2019 server with an ReFS repo, which never had any memory issues, to W2022.
I opened a Veeam case, but was told that the issue is obviously not on Veeam's side.
So as not to start arguing, I'm using the following workaround for the moment:
Download the RAMMap utility from Microsoft Sysinternals and create a scheduled task (say, once a day) that runs:
RAMMap64.exe -es
This empties the "system working set", which the "metafile" cache is part of.
At least we no longer need to reboot the server nearly every day to recover from the metafile cache eating all the memory (not to mention the jobs that were failing because of it).
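For reference, the scheduled task can be created from an elevated prompt along these lines (the path to RAMMap64.exe, the task name and the run time are just examples for illustration; note that Sysinternals tools require EULA acceptance on first run, which may need handling for the account the task runs as):
schtasks /Create /TN "Empty system working set" /TR "C:\Tools\RAMMap64.exe -es" /SC DAILY /ST 06:00 /RU SYSTEM /F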