I have recently enabled Storage Latency Control for a customer, as we have consolidated down to a few large datastores and it seemed a sensible option to enable.
I have seen jobs get throttled (by having "[throttled]" appear next to the hard disk being backed up), so I know it is kicking in.
My question is: does it ever get turned off for a hard disk, after it has been turned on?
So, does the datastore latency keep being monitored, and when it drops back below the configured threshold, does Veeam then "release" the throttle and allow the VM to be backed up at full speed?
The reason I ask is that the backups contain a mix of VMs: some small, some large (file servers, etc.). I have seen the file servers get throttled during the initial backup process, but once all the other VMs have finished and the file servers are the only ones still being backed up, they appear to stay throttled until they complete. If they are doing full backups (a few HA events caused by power issues have disrupted CBT, for example), they take a very long time to complete, even though the datastore latency values reported by VMware itself are below 5ms.
So I have the feeling that once storage latency control kicks in for a VM's hard disk, it doesn't ever get released.
The documentation only says that latency is monitored every 20 seconds through the hypervisor itself.
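To make clear what behaviour I'm expecting, here is a minimal sketch (purely my assumption about how it *should* work, not Veeam's actual implementation) of a poll-and-release loop based on the documented 20-second monitoring interval: latency above the threshold turns throttling on, and latency back below the threshold turns it off again.

```python
def throttle_states(latency_samples_ms, threshold_ms=20):
    """For each polled latency sample (taken every ~20 s in my assumed model),
    return whether the disk would be throttled at that point.
    Throttling is released as soon as latency falls back below the threshold."""
    throttled = False
    states = []
    for latency in latency_samples_ms:
        if latency > threshold_ms:
            throttled = True    # latency too high: start throttling
        else:
            throttled = False   # latency recovered: release the throttle
        states.append(throttled)
    return states

# Example: latency spikes then recovers, so the throttle should be released.
print(throttle_states([5, 25, 30, 4]))  # [False, True, True, False]
```

My question is essentially whether the last step (the release on recovery) actually happens per hard disk, because what I observe looks more like the throttle staying on permanently once triggered.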
One side note: Veeam ONE 9.5 had an issue (fixed in Update 1) whereby it displayed the incorrect value for datastore latency (it's in the release notes: "Datastore latency performance counter is showing wrong measurement units"). Does Veeam B&R use the same codebase for this, and if so, is the same issue present (or absent) in Veeam B&R 9.5 (with or without Update 1)? I ask because I have seen throttling kick in when the datastore isn't close to the latency value specified in the settings, which might explain the behaviour I am seeing. This customer was receiving floods of datastore latency alerts from Veeam ONE after upgrading to 9.5, so I had to turn that alarm off (as it was clearly not showing correct values); it's back on after upgrading to Update 1 and seeing the issue fixed in the release notes.