-
- Influencer
- Posts: 14
- Liked: 1 time
- Joined: Nov 12, 2021 3:03 pm
- Full Name: Marc K.
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
Maybe someone can answer some questions while I am fighting this issue: is there a critical size of cluster shared volume, or of VHDX, or both, that triggers the problem?
Normally we put several application VMs together on a larger volume. That is a bit easier to manage, and on our older storage it also saved space. On the new PowerStore SAN "space saving" is not the point, because the PowerStore is a globally thin-provisioning device, but from a management point of view a lower count of volumes is easier to handle.
Next question: is it better to use NTFS or ReFS for the volumes that show the problem? I am unsure; it "feels" like ReFS is less performant than NTFS in our constellation.
And somewhere I read that disabling Hyper-Threading could help. Can anyone confirm that?
Thanks!
-
- Enthusiast
- Posts: 52
- Liked: 12 times
- Joined: Aug 26, 2019 7:04 am
- Full Name: Christine Boersen
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
(I was the one who discussed the Hyperthreading helping).
Though disabling HT is a partial LAST RESORT fix, losing HT is far too big an issue in the long run. I quit needing that fix a few years ago. It is your "last resort" option.
Much less drastic things, all of which improved one aspect or another of our backups over the years, were:
- Not mixing SAS and SATA on the same bus. 12th-gen Dell SAS 6Gbps was a little sensitive (putting a 12Gbps HBA controller onto the 6Gbps backplane solves the mixing issue on the 12th-gen backplane). We had a few older 6Gbps disk shelves that were sensitive to this as well.
- Not mixing "spinning rust" with SSDs on the same bus without testing. Same issues as with the SAS/SATA mixture. Some buses/shelves are sensitive to the mixing (and some explicitly tell you not to, appear to work, then you end up with random-ish errors UNDER LOAD).
- Ensuring you don't have too many I/O threads on the backup store *OR* the source volumes. You *WILL* time out a workload if those are set too high on S2D.
- Paying attention to the latency control thresholds in Veeam. SSDs, and especially fast NVMe, need latency thresholds MUCH lower than you would think (like 5-10 ms at most).
Additionally, moving to Server 2022 brought some OS improvements as time went on, and then Server 2025 helped further as the machines were upgraded when those OSes came out (within 6 months of GA).
Hope that helps.
-
- Influencer
- Posts: 14
- Liked: 1 time
- Joined: Nov 12, 2021 3:03 pm
- Full Name: Marc K.
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
Ah OK. We have an all-flash iSCSI SAN, and the Veeam server has 12Gbps SAS disks. Everything is pretty new, Q1/2025. Because the project was started last year and then closed, Windows Server 2025 was not yet available from Dell, they were not able to offer us an upgrade for the project later, and my budget did not allow an "external" upgrade.
Yesterday I gave the Agent backup another try. Strangely, during the starting phase (~2-3 minutes) the latency also goes high (~10,000 ms), but then it comes down on its own for the rest of the backup.
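A startup-only spike like that can be told apart from a sustained latency problem by checking whether the high samples are confined to the first minutes of the run. A rough Python sketch of that distinction (the threshold and warm-up window here are illustrative assumptions, not Veeam's values):

```python
# Classify a latency trace: "transient" if high samples occur only inside
# the warm-up window, "sustained" if they continue afterwards.
# The 1000 ms threshold and 3-sample warm-up are illustrative assumptions.

def classify_spike(samples_ms, warmup_samples=3, threshold_ms=1000):
    tail = samples_ms[warmup_samples:]
    if any(s > threshold_ms for s in tail):
        return "sustained"
    if any(s > threshold_ms for s in samples_ms[:warmup_samples]):
        return "transient"
    return "normal"

# Roughly the pattern described above: ~10,000 ms at the start, then it settles.
print(classify_spike([9800, 10200, 950, 40, 35, 30]))  # prints "transient"
```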
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Jul 11, 2024 1:22 am
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
Is the fix still enabled via a reg key? I've installed the June CU for Server 2022 and the reg key mentioned in the Veeam KB is not present. Is the fix enabled via an alternative method when this CU is installed?
-
- Service Provider
- Posts: 7
- Liked: 1 time
- Joined: Apr 23, 2025 11:11 am
- Full Name: Mark Løgtved Møller
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
So we just updated a cluster that was fixed back in February, with KB5063880.
Now it looks like we are having the issue again. Our workaround before was a rotation of all VMs with a script; I have enabled this again.
But by the looks of it, it now takes longer before the VMs get higher latency.
Is anyone else seeing the same issue?
I still need to troubleshoot more to completely confirm the issue is the same, but by the looks of it a live migration fixes the latency.
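The rotation workaround mentioned above (live-migrating every VM to another host) can be sketched as simple round-robin reassignment logic. This is only an illustrative Python sketch with hypothetical host and VM names, not the poster's actual script; a real script would drive Hyper-V live migration, which this planning step does not do:

```python
# Sketch of the "rotate all VMs" workaround: plan a move of each VM to the
# next host in the cluster, so every VM gets live-migrated once per rotation.
# All names are hypothetical; actual migration calls are out of scope here.

def plan_rotation(vm_to_host, hosts):
    """Return {vm: target_host}, moving each VM to the next host round-robin."""
    order = {h: i for i, h in enumerate(hosts)}
    return {
        vm: hosts[(order[host] + 1) % len(hosts)]
        for vm, host in vm_to_host.items()
    }

if __name__ == "__main__":
    hosts = ["HV01", "HV02", "HV03"]
    placement = {"app1": "HV01", "app2": "HV02", "db1": "HV03"}
    # Every VM gets a new host, forcing a fresh I/O path for each one.
    print(plan_rotation(placement, hosts))
```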
-
- Veteran
- Posts: 259
- Liked: 39 times
- Joined: Jun 15, 2009 10:49 am
- Full Name: Gabrie van Zanten
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
Are you running 2022 or 2025? We just upgraded from 2016 to 2022 (May build) and so far no issues.
(OK, well, one issue: we had implemented the MSI but forgot to enable the new reg key.)
-
- Service Provider
- Posts: 7
- Liked: 1 time
- Joined: Apr 23, 2025 11:11 am
- Full Name: Mark Løgtved Møller
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
The clusters are running 2022.
To my knowledge the May/June update supersedes the fix update, and the fix is now enabled by default after that.
-
- Influencer
- Posts: 14
- Liked: 1 time
- Joined: Nov 12, 2021 3:03 pm
- Full Name: Marc K.
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
On our side, Hyper-V 2022 is now running CU9, and it looks like CU9 is way worse than CU8. We have much more trouble again than with CU8. CU8 was not really perfect, but it was better than CU9.
Also, I can't say that using the Agent as a workaround solves the problem. A full backup with the Agent causes the problem 100% of the time, at least in the starting phase.
Also, it looks like the most problematic VMs are the ones migrated from our old 2019 cluster, but some new ones are just as bad. And it does not seem to be a matter of VM size.
I am really thinking about migrating from Hyper-V to Proxmox.
Is there any other place where you can read about this problem besides here?
-
- Service Provider
- Posts: 7
- Liked: 1 time
- Joined: Apr 23, 2025 11:11 am
- Full Name: Mark Løgtved Møller
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
I have been looking more into it, and in our environment we actually found a faulty SFP module on the link towards the SAN.
That caused a lot of issues similar to what has been described here.
I still think we have some dust on the fiber cable, as we still observe minor errors; however, the customer does not have any issues anymore, and everything is running smoothly.
So for us the issue was caused by a faulty SFP module that went on and off.
-
- Influencer
- Posts: 20
- Liked: 7 times
- Joined: Jan 16, 2023 3:13 pm
- Full Name: Joel G
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
We upgraded our cluster to 2022 in April and implemented the fix. It seemed to resolve the issues for us at that time.
I export the events to CSV every week so I can track them in a spreadsheet. I hadn't checked it in a while, so I thought I should do that after seeing your post...
- Prior to the fix we were getting over 5,000 errors per day (not including our evening backup window).
- Between May 28th and Oct 6th (when I switched to weekly reports instead of daily) we have been getting about 200 errors per day (also not including our evening backup window).
Joel
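The per-day tracking described above can be done by counting events per day from the exported CSV. A minimal Python sketch; the column name "TimeCreated" and the ISO timestamp format are assumptions about the export, not a documented format:

```python
# Count events per day from an exported event-log CSV, so the daily error
# rate can be compared before and after a fix. The "TimeCreated" column
# name and ISO-8601 timestamps are assumptions about the export format.
import csv
from collections import Counter
from datetime import datetime

def errors_per_day(csv_path, time_column="TimeCreated"):
    """Return {date: event_count} for all rows in the exported CSV."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            day = datetime.fromisoformat(row[time_column]).date()
            counts[day] += 1
    return dict(counts)
```

Feeding each weekly export through this and averaging the daily counts gives the kind of before/after comparison quoted above.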
-
- Service Provider
- Posts: 7
- Liked: 1 time
- Joined: Apr 23, 2025 11:11 am
- Full Name: Mark Løgtved Møller
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
Hi Joel,
200 errors a day is still too much.
We also had a lot of errors, but they were mitigated via a script that rotated all VMs between the hosts.
Now we do not have any errors at all.
-
- Influencer
- Posts: 14
- Liked: 1 time
- Joined: Nov 12, 2021 3:03 pm
- Full Name: Marc K.
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
So you don't know whether the problem is really solved, because you still rotate VMs?
-
- Service Provider
- Posts: 7
- Liked: 1 time
- Joined: Apr 23, 2025 11:11 am
- Full Name: Mark Løgtved Møller
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
MarcK,
Not anymore; we did until we got the fix back in February.
But after we updated the cluster we began to see the issues again and started the rotation; however, after more troubleshooting we discovered it was due to a faulty SFP module.
Now we are not rotating anymore.
It has been running like this for about two weeks without any errors (it is about two weeks since we replaced the SFP).
-
- Influencer
- Posts: 14
- Liked: 1 time
- Joined: Nov 12, 2021 3:03 pm
- Full Name: Marc K.
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
Ah OK, I had missed that the info about the SFP was from you.
We also placed a ticket with our reseller for a review of our system. I hope we can find a problem in the hardware or the setup.
Could you share your rotation script for the time until we find another error?
-
- Service Provider
- Posts: 7
- Liked: 1 time
- Joined: Apr 23, 2025 11:11 am
- Full Name: Mark Løgtved Møller
- Contact:
Re: Windows Server 2019 Hyper-V VM I/O Performance Problem
MarcK,
Sure, I have sent you a message via the forum.