Host-based backup of KVM-based VMs (Red Hat Virtualization, Oracle Linux Virtualization Manager and Proxmox VE)
nimda
Influencer
Posts: 15
Liked: 2 times
Joined: Oct 08, 2024 10:23 am
Contact:

Veeam Worker high IO-delay/IO-wait, huge amount kvm processes

Post by nimda »

I use a hardware SAS RAID controller with RAID10. While the Veeam Worker is being prepared for a backup or restore, the IO-delay/IO-wait sometimes rises extremely, up to 90%, and stays high for up to 20 minutes. During this phase a huge number of kvm processes are running (visible in iotop).

The consequences of high IO-wait are described here: "https://www.site24x7.com/learn/linux/tr ... -wait.html". The advice given there is: "Reduce the frequency of disk reads and writes by reducing I/O operations such as database queries."

So I would suggest an option to limit the number of kvm processes for the Veeam Worker VM.

Re: Veeam Worker high IO-delay/IO-wait, huge amount kvm processes

Post by nimda »

I could imagine an automatic system that regulates the number of kvm processes so that IO-wait cannot exceed, for example, 15%. Higher IO-wait delays the other VMs on the virtualization server too much, or can even freeze the entire virtualization server.
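To illustrate the idea, here is a minimal sketch of how such a regulator could measure system-wide IO-wait on a Linux host from two `/proc/stat` samples and compare it against the 15% threshold. This is purely illustrative: the threshold, the function names, and the "throttle" decision are my own assumptions, not part of any Veeam API.

```python
#!/usr/bin/env python3
"""Illustrative sketch: sample aggregate IO-wait from /proc/stat and
decide whether kvm worker processes should be throttled.
All names and the threshold are assumptions, not a real Veeam feature."""

import time

IOWAIT_LIMIT = 15.0  # percent; the example threshold from the post


def iowait_percent(sample1, sample2):
    """IO-wait percentage between two /proc/stat 'cpu' samples.

    Each sample is the list of counters from the aggregate 'cpu' line:
    user, nice, system, idle, iowait, irq, softirq, ...
    """
    total = sum(sample2) - sum(sample1)
    iowait = sample2[4] - sample1[4]  # field 5 of the 'cpu' line is iowait
    return 100.0 * iowait / total if total else 0.0


def read_cpu_sample():
    """Read the aggregate 'cpu' counters from /proc/stat (Linux only)."""
    with open("/proc/stat") as f:
        fields = f.readline().split()
    return [int(v) for v in fields[1:]]


if __name__ == "__main__":
    s1 = read_cpu_sample()
    time.sleep(1)
    s2 = read_cpu_sample()
    pct = iowait_percent(s1, s2)
    action = "throttle kvm workers" if pct > IOWAIT_LIMIT else "ok"
    print(f"iowait {pct:.1f}% -> {action}")
```

A real implementation would of course run this in a loop and pause or resume worker tasks instead of just printing, but the measurement part is the same.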

Re: Veeam Worker high IO-delay/IO-wait, huge amount kvm processes

Post by nimda »

I observed that the mount option "data=writeback" in fstab was the main reason for the far too high IO-waits. dmesg showed "Buffer I/O error on dev xxx, logical block yyy lost async page write".
After removing this mount option, IO-wait went back to a short peak of 25%. Nevertheless, I still think the number of kvm processes should be regulated and limited as described in my post above.
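For reference, the change in fstab looks roughly like this (device name, mount point, and filesystem are placeholders, not the actual values from my system):

```
# before: data=writeback disables ordered journaling
/dev/sdb1  /var/lib/vz  ext4  defaults,data=writeback  0  2

# after: drop the option and fall back to the ext4 default (data=ordered)
/dev/sdb1  /var/lib/vz  ext4  defaults  0  2
```

With data=writeback, ext4 journals only metadata and may write data blocks after their metadata is committed, which trades integrity guarantees for throughput; the default ordered mode avoids that.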