This is what support gave me.
In short, this is the algorithm currently used for the Time Period setting on counter-based alarms:
1. Pick a counter-based alarm.
2. Take note of all alarms that use this same counter and find the one with the lowest Time Period value set.
3. In the Registry Editor, find the UsageTriggerIntervalAccuracy value under the Veeam ONE key.
4. Divide the lowest Time Period value from step 2 by the UsageTriggerIntervalAccuracy value from step 3 to get the interval used for measuring the average counter value.
5. Each alarm on the counter in question then takes a certain number of the averaged points from step 4, corresponding to that alarm's own Time Period value (there may be more than one such alarm, as per step 2), and calculates the average of those points in its turn.
6. If the average of those averaged points exceeds the threshold, the alarm is triggered; if not, it is not.
7. So, regardless of what Time Period is set for the alarm, the condition is checked at every interval calculated in step 4. The Time Period setting only determines how many intermediate averaged points are used to calculate the final average value that is compared with the threshold. In other words, the Time Period setting is not a delay that postpones alarm triggering, but a period that sets the number of averaged points as per step 4.
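To make sure I understood the explanation, here is a minimal sketch of the logic in Python. All names and the sample layout are my assumptions for illustration, not Veeam ONE internals:

```python
# Sketch of the averaging logic described by support (hypothetical names).

def sampling_interval(time_periods_min, accuracy):
    """Steps 2-4: the lowest Time Period among alarms on the same counter,
    divided by UsageTriggerIntervalAccuracy, gives the interval (in minutes)
    at which intermediate averaged points are produced."""
    return min(time_periods_min) / accuracy

def alarm_triggered(raw_samples, alarm_period_min, interval_min, threshold):
    """Steps 5-6: fold per-minute raw samples into intermediate averaged
    points of length interval_min, keep as many points as fit this alarm's
    own Time Period, then compare their overall average with the threshold."""
    per_point = int(interval_min)  # raw samples per intermediate point
    points = [sum(raw_samples[i:i + per_point]) / per_point
              for i in range(0, len(raw_samples) - per_point + 1, per_point)]
    needed = int(alarm_period_min / interval_min)  # step 5: points used
    window = points[-needed:]                      # most recent points
    return sum(window) / len(window) > threshold   # step 6

# Example: alarms with Time Periods of 15 and 60 minutes share a counter,
# and UsageTriggerIntervalAccuracy = 3, so an intermediate averaged point
# is produced every 15 / 3 = 5 minutes.
interval = sampling_interval([15, 60], accuracy=3)
```

On this reading, the 60-minute alarm would average its last 12 intermediate points (60 / 5) every 5 minutes, rather than waiting 60 minutes between checks.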
I double-checked this with modelling, and there the alarm rule works as expected.
Only in the real world there seems to be a bug: only the last 15 minutes are taken as the sampling window.
Can you ask support to give me the hotfix?