I’ve noticed a disparity between the information compiled by Veeam One Reporter and that shown by Veeam One Monitor and vCenter. An example:
These are all from the same time period, on the same server. (Slight discrepancies may be present depending on how each system decides what ‘the last week’ means.) That said, the data in vCenter and Monitor roughly matches up – both see peaks to near 100% CPU usage.
The data from Reporter looks very different, though. The peaks only appear to reach around 50%. The same discrepancy shows up in the respective min/max values: vCenter and Monitor report a max of around 95%, but Reporter only shows 48%.
I have verified that the same discrepancy exists for other servers too, though not always to the same extent (i.e. not necessarily ~double; in some cases it’s the difference between 60% and 85% or similar).
Surely Monitor and Reporter both get their data from the same Veeam One database, so is this just a scaling issue with Reporter? The peaks and troughs look roughly the same; only the numbers are off.
I put this question to Veeam support and was told that this is expected behaviour, as Reporter uses 2-hour aggregation for its graph points. The most baffling part is that the min/max numbers under the graphs are also aggregated in this way.
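To see how 2-hour averaging can roughly halve an apparent peak, here is a minimal sketch with synthetic data (my own illustration, not Veeam's actual sampling or code). If the raw 20-second samples sit around 30% with a ~30-minute spike to 95% in each 2-hour window, the mean of each window lands near 48% – and taking the max of those means, as Reporter apparently does, reports 48% instead of 95%:

```python
# Sketch (synthetic data, not Veeam's actual code): how 2-hour averaging
# hides CPU peaks. Raw samples every 20 seconds over one day, baseline
# ~30% with a ~33-minute spike to 95% inside each 2-hour window.
samples_per_2h = 2 * 60 * 60 // 20  # 360 samples per 2-hour bucket

day = []
for _ in range(12):                 # 12 two-hour buckets in a day
    load = [30.0] * samples_per_2h
    load[0:100] = [95.0] * 100      # ~33-minute spike in each bucket
    day.extend(load)

true_max = max(day)                 # what vCenter/Monitor would show

# Aggregate into 2-hour means, then take the max of those means --
# apparently what Reporter's graph and its min/max figures both do.
buckets = [day[i:i + samples_per_2h]
           for i in range(0, len(day), samples_per_2h)]
means = [sum(b) / len(b) for b in buckets]
agg_max = max(means)

print(f"true max: {true_max:.0f}%, max of 2h averages: {agg_max:.1f}%")
# → true max: 95%, max of 2h averages: 48.1%
```

The spike survives in the raw data but is diluted by the 330-odd quiet samples sharing its bucket, which is consistent with the 95% vs 48% figures above.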
What use is a graph whose numbers you cannot trust because it is so heavily averaged that the peaks are smoothed away? Surely this can't be intended behaviour...