Host-based backup of Microsoft Hyper-V VMs.
MarcK
Influencer
Posts: 11
Liked: never
Joined: Nov 12, 2021 3:03 pm
Full Name: Marc K.
Contact:

Re: Windows Server 2019 Hyper-V VM I/O Performance Problem

Post by MarcK »

Maybe someone can answer a few questions while we fight this issue: is there a critical size of the Cluster Shared Volume, of the VHDX, or of both that triggers the problem?
Normally we put several application VMs together on one larger volume. That is a bit easier to manage, and on our older storage it also saved space. On the new PowerStore SAN "space saving" is not the point, because the PowerStore does global thin provisioning, but from a management point of view a lower number of volumes is easier to handle.
Next question: is it better to use NTFS or ReFS for the volumes that show the problem? I am unsure; it "feels" like ReFS is less performant than NTFS in our setup.
And somewhere I read that disabling Hyper-Threading could help. Can anyone confirm that?
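If the critical-size question depends on our layout: this is roughly how I pull the size and file system of each CSV on one of the Hyper-V nodes (just a quick sketch using the FailoverClusters module; property names may differ slightly between OS versions):

# List each Cluster Shared Volume with its file system and size/free space in GB
Get-ClusterSharedVolume | ForEach-Object {
    $info = $_.SharedVolumeInfo
    [PSCustomObject]@{
        Name       = $_.Name
        Path       = $info.FriendlyVolumeName
        FileSystem = $info.Partition.FileSystem
        SizeGB     = [math]::Round($info.Partition.Size / 1GB)
        FreeGB     = [math]::Round($info.Partition.FreeSpace / 1GB)
    }
} | Format-Table -AutoSize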
Thanks!
ChristineAlexa
Enthusiast
Posts: 52
Liked: 12 times
Joined: Aug 26, 2019 7:04 am
Full Name: Christine Boersen
Contact:

Re: Windows Server 2019 Hyper-V VM I/O Performance Problem

Post by ChristineAlexa »

(I was the one who mentioned that disabling Hyper-Threading can help.)

Disabling HT is a partial LAST RESORT fix, though: losing HT is WAY too big a hit in the long run. I stopped needing that fix a few years ago. Keep it as your last-resort option.

Much less drastic things, all of which have improved one aspect or another of our backups over the years, were:
- Not mixing SAS and SATA on the same bus. The 12th-gen Dell 6Gbps SAS backplanes were a little sensitive to this (putting a 12Gbps HBA controller onto the 6Gbps backplane solves the mixing issue on the 12th-gen backplane). We had a few older 6Gbps disk shelves that were sensitive to it as well.
- Not mixing "spinning rust" with SSDs on the same bus without testing. Same issue as the SAS/SATA mix: some buses/shelves are sensitive to the mixing (and some explicitly tell you not to do it, appear to work anyway, and then you end up with random-ish errors UNDER LOAD).
- Making sure you don't have too many I/O threads on the backup store *OR* the source volumes. You *WILL* time out a workload if those are set too high on S2D.
- Paying attention to the latency control thresholds in Veeam. SSDs, and especially fast NVMe, need thresholds MUCH lower than you would think (like 5 or 10 ms at most). See the quick latency check below.
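If it helps anyone, this is roughly how I spot-check what latency the disks are actually seeing before deciding where to set those thresholds (a sketch using the standard PhysicalDisk counters; counter names assume an English Windows install):

# Sample average read/write latency for all physical disks:
# 12 samples, 5 seconds apart (about a minute of data).
Get-Counter -Counter @(
    '\PhysicalDisk(*)\Avg. Disk sec/Read',
    '\PhysicalDisk(*)\Avg. Disk sec/Write'
) -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples | ForEach-Object {
            # CookedValue is in seconds, convert to milliseconds
            '{0,-55} {1,8:N1} ms' -f $_.Path, ($_.CookedValue * 1000)
        }
    }

That gives a quick baseline to compare against the 5-10 ms range mentioned above.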


Additionally, moving to Server 2022 brought some OS-level improvements over time, and Server 2025 helped further as the machines were upgraded when those OS versions came out (within 6 months of GA).


Hope that helps.
MarcK
Influencer
Posts: 11
Liked: never
Joined: Nov 12, 2021 3:03 pm
Full Name: Marc K.
Contact:

Re: Windows Server 2019 Hyper-V VM I/O Performance Problem

Post by MarcK »

Ah, OK. We have an all-flash iSCSI SAN, and the Veeam server has 12Gbps SAS disks. Everything is pretty new, Q1/2025. Because the project was started and closed last year, Windows Server 2025 was not yet available from Dell, they were not able to offer us an upgrade for the project later, and my budget did not allow an "external" upgrade :-(.

Yesterday I gave the agent backup another try... strange thing there: during the first ~2-3 minutes the latency also goes up to ~10,000 ms, but then it comes down on its own for the rest of the backup.
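To see exactly when the latency drops back, my plan is to log the counter over the backup window and compare the timestamps with the job log, roughly like this (just a sketch; the output path C:\Temp\disk-latency.csv is only an example):

# Log average read latency per disk every 10 seconds for ~30 minutes
$samples = Get-Counter -Counter '\PhysicalDisk(*)\Avg. Disk sec/Read' `
    -SampleInterval 10 -MaxSamples 180
$samples.CounterSamples |
    Select-Object Timestamp, InstanceName,
        @{ Name = 'LatencyMs'; Expression = { [math]::Round($_.CookedValue * 1000, 1) } } |
    Export-Csv -Path 'C:\Temp\disk-latency.csv' -NoTypeInformation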