-
- Influencer
- Posts: 10
- Liked: never
- Joined: Aug 12, 2009 5:05 pm
- Full Name: Andy
- Contact:
utilization >1hr flattening min/max/avg values
Love the product, love how simple it is to connect to a vCenter and tie in with Veeam backup, etc.
However, here is a great example of why it's not practical for reporting past 1 hour.
We have a box whose CPU spikes to 100% every 10 minutes for 2 minutes, then drops back below 5%.
With a 1-hour view, things are pretty much spot on:
Latest: 1.3%
Average: 9.2%
Max: 100%
However, let's look at the whole day:
Latest: 1.3%
Average: 5.11%
Max: 33%
Even going back just 2 hours, anything past 1 hour gets totally flattened! So it's shaving off all my spikes, which are very important to know about! Is there any way to keep this from happening? I'm using a tool called Zabbix on this VM, and I can stretch the view out to 1 year and still see the maximum at 100%.
Even setting the custom time to a 1-hour interval, 2 hours back, things are flattened. I understand the need to consolidate counters, but averaging them can really take visibility away from those of us looking to get real min/max/average trend data over periods greater than 1 hour.
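The flattening described here falls straight out of the arithmetic of averaging. A minimal sketch with illustrative numbers (matching the spike pattern above, not actual Veeam data) shows how a short 100% spike disappears once raw samples are rolled up into bucket averages:

```python
# Simulate 1-minute CPU samples: a 2-minute 100% spike every 10 minutes,
# ~5% the rest of the time. Numbers are illustrative only.

def average(xs):
    return sum(xs) / len(xs)

# One hour of 1-minute samples.
samples = ([100.0, 100.0] + [5.0] * 8) * 6

# Raw view: the true maximum survives.
print(max(samples))            # 100.0

# Aggregate into 10-minute buckets by averaging, then take the max
# of the bucket averages -- this is what "flattening" does.
buckets = [samples[i:i + 10] for i in range(0, len(samples), 10)]
flattened_max = max(average(b) for b in buckets)
print(flattened_max)           # 24.0 -- the 100% spike is gone
```

Once only the bucket averages are kept, no amount of post-processing can recover the true peak; that is why the "Max" column shrinks as the time range grows.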
-
- VP, Product Management
- Posts: 27357
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: utilization >1hr flattening min/max/avg values
Hello Andy,
Thanks for the feedback.
Veeam Monitor is mostly used for real-time troubleshooting, which is why you see spikes in the last-hour reports only. Just out of curiosity, what time interval would you like to use for keeping non-aggregated data?
You can increase the default value with the help of a support utility; just contact our technical team for assistance with this.
Please note that increasing the non-aggregated data time period will increase your database size, as more counters will be stored for the defined period of time; you would also need more RAM on your SQL Server compared to our default requirements.
Thank you!
-
- Service Provider
- Posts: 47
- Liked: never
- Joined: Mar 18, 2009 1:05 am
- Contact:
Re: utilization >1hr flattening min/max/avg values
I think what Andy is trying to say is: the maximum value for the past hour is 100%, the maximum value for the past 2 hours is 100%, and in fact the maximum value for the complete history is 100%. Just keeping the stored maximum value should be possible.
I would like to see this added as well.
If you resolved the issue (say, ten days ago), then obviously the max value for the past week wouldn't report it, but the maximum value for the past month would. This can also help to show where improvements have been made.
Maximums are maximums.
No issues with averages.
The only real problem I can see is, you get
-
- VP, Product Management
- Posts: 27357
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: utilization >1hr flattening min/max/avg values
Well... saving maximum values for a defined period of time will cause DB growth and require more system resources, but thanks anyway for the feedback; we'll see what we can do here.
-
- Influencer
- Posts: 10
- Liked: never
- Joined: Aug 12, 2009 5:05 pm
- Full Name: Andy
- Contact:
Re: utilization >1hr flattening min/max/avg values
I found a neat little utility called "SvcTuning.exe" - is that where I would increase some value to get, say, 24 hours of un-trended/unaveraged performance data? Which setting do I increase? I was looking for something with the value 3600, but I don't see it. Is there any documentation for this executable, perhaps? I see some settings that look interesting.
As far as performance impact goes - I have a dedicated physical machine watching ~50 VMs on 1 vCenter with 6 hosts, with 4 CPUs and a 150 GB disk - so I'm not sure I should be worried if I want to get 24 hours of "pure" data, right?
-
- VP, Product Management
- Posts: 27357
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: utilization >1hr flattening min/max/avg values
Andy,
Congratulations, you have just found our internal support utility for Veeam Monitor.
There is no documentation for this tool, as every change should be performed under our technical team's guidance, so please contact them directly for assistance.
Thank you!
alubel wrote: As far as performance impact - I have a dedicated physical machine watching ~50 VMs on 1 vCenter with 6 hosts - 4 CPUs with 150 GB disk - so I'm not sure I should be worried if I want to get 24 hours of "pure" data, right?
Yes, that should be fine, though you would need at least 4 GB of RAM on this server as well.
-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Dec 16, 2015 12:55 pm
- Full Name: Michael Borchers
- Contact:
[MERGED] veeamONE strange performance values
Hello,
I installed veeamONE 2 days ago to monitor our company's ESXi cluster and B&R server. Everything seems properly configured and working fine, but when I looked at the performance views I noticed strange values. For example, the maximum Datastore I/O Usage value for a virtual machine "x" for the past day is 1000. If I switch to the past-week view, the maximum is just 200. Switch to the past-month view and the maximum is 50, and so on. In my world the maximum should still be 1000, right? I see this behavior in all my performance views.
We are using Version 8.0.0.1569 ...
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: veeamONE strange performance values
Hello Michael,
Your reasoning is correct. Let me explain how Veeam ONE aggregates performance values.
It collects 20-second samples from ESXi and shows them for about a day (depending on the settings), then aggregates the 20-second values into 5-minute intervals and shows the average value, and so on up to 2-hour averages. At the same time, Veeam ONE remembers the maximum sample value, which was 1000 in your case.
Thanks!
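The rollup described above can be sketched roughly as follows. This is an illustrative model only, not Veeam's actual implementation; the function names, bucket sizes, and sample values are assumptions for demonstration:

```python
# Rough sketch of average-based rollup with a separately remembered
# all-time max sample, as described above. Illustrative only.

def rollup(samples, factor):
    """Average consecutive groups of `factor` samples into one value."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]

# 20-second samples: mostly idle, one short 1000-unit I/O burst.
samples = [10.0] * 60
samples[30] = 1000.0

all_time_max = max(samples)          # remembered separately: 1000.0

five_min = rollup(samples, 15)       # 15 x 20 s = 5-minute averages
print(max(five_min))                 # 76.0 -- the burst is averaged away
print(all_time_max)                  # 1000.0 -- survives only as a scalar
```

This is why the chart's per-period maximum shrinks at coarser resolutions even though the true peak is still known somewhere: the averaged buckets no longer contain it.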
-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Dec 16, 2015 12:55 pm
- Full Name: Michael Borchers
- Contact:
Re: utilization >1hr flattening min/max/avg values
Thanks for merging!
So, I have the same problem. I also think a maximum should always be a maximum. The answer "Veeam Monitor is mostly used for real-time troubleshooting" is a bit too little for me, and not what your advertising promises, in my opinion. It seems I can't find this little support utility, but I'll contact your technical team.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: utilization >1hr flattening min/max/avg values
Michael, you can modify the registry to postpone those aggregations, but you can't get rid of them entirely, since that would overfill your database at once.
You still have historical data, and the aggregation looks reasonable here.
Could you specify what kind of behavior you expected from Veeam ONE? Thanks!
-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Dec 16, 2015 12:55 pm
- Full Name: Michael Borchers
- Contact:
Re: utilization >1hr flattening min/max/avg values
I would always expect a real maximum, not the maximum of the averages, like described earlier in this topic. Currently veeamONE is flattening the values.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: utilization >1hr flattening min/max/avg values
I see your point, Michael.
We do keep the max value of each counter. If it's above the defined threshold, a corresponding alarm will be triggered and show you the value.
Another way is to generate the Custom Performance report, or its raw version, with 5-minute values and min/max at the end.
Thanks again for the feedback, we will think the issue over.
-
- VP, Product Management
- Posts: 27357
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: utilization >1hr flattening min/max/avg values
michaelB wrote: I would expect always a real maximum and not the maximum of the averages. Like described in this topic before, veeamONE is currently flattening the values.
Can you please tell us what minimum interval should be used for this value - 1 day or something else? If a VM reboots, it maximizes CPU usage to 99%, so keeping that value according to the retention policy might be overkill.
-
- Enthusiast
- Posts: 83
- Liked: 9 times
- Joined: Oct 31, 2013 5:11 pm
- Full Name: Chris Catlett
- Contact:
Re: utilization >1hr flattening min/max/avg values
I will add this comment: the way averaging is computed gives false data in the undersized/oversized VM reports.
-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Dec 16, 2015 12:55 pm
- Full Name: Michael Borchers
- Contact:
Re: utilization >1hr flattening min/max/avg values
Ok, I'll just try to explain what I'm actually doing with veeamONE.
We have a new VMware infrastructure with new servers (ESXi hosts) and new datastores. The datastores have different performance characteristics: the first half have really good IOPS performance and average throughput, while the other half are the opposite, with really good throughput and average IOPS. Now I'm analyzing with veeamONE which datastore is best for each VM. It would be easy to say "store all database VMs on the first datastore and all file-transfer-like VMs on the second", but our case is not that simple. Our VMs perform differently, and often they use their full resources only for a short period of time (5 min to 2 hours). Even if it's a short period, it is really important for us that the VMs get as much performance as they can in that time.
Now, if I use veeamONE I need a long period of time (1-2 weeks) to fully analyze the VMs. For example, my threshold for choosing one of the well-performing IOPS datastores is >750 IOPS, with the chart showing more than one peak. VeeamONE now tells me for one VM "Datastore IO usage maximum 200 and multiple peaks at 200 IOPS - past week", so I would choose one of the other datastores with lower IO performance. But this information is simply wrong, because in reality the peaks are about 1000 IOPS, and I would choose the wrong datastore.
Luckily, I now know about this behavior. Strangely, if I analyze only the peak with a custom interval, I seem to get the correct maximum value (>1000, 2 days ago). So the correct data seems to be stored, but not shown in the past-week view. I could now analyze each peak for a large number of VMs and different performance types, but that would be really uncomfortable.
Veeam and veeamONE are new to me, and maybe I'll get the information I need once I know more about the reporting features.
I'll go on with reading the documentation.
Thanks for your support.
-
- VP, Product Management
- Posts: 27357
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: utilization >1hr flattening min/max/avg values
ccatlett1984 wrote: I will add this comment, the way averaging is computed gives false data with the undersized/oversized VM reports.
Over-sized/under-sized reports use 30% buffers to address this, so that every performance spike (if it happens on a regular basis) is "accounted for" in the recommendation.
michaelB wrote: Ok, I'll just try to explain what I'm doing with veeamONE actually. ...
That's a perfect usage scenario for Veeam ONE. Automating VM placement, or giving recommendations on where to put a VM, sounds like a good feature request. For your analysis I would also recommend reviewing our datastore performance assessment report, which correlates latency, IOPS and read/write rates over a historical time period.
-
- Influencer
- Posts: 23
- Liked: never
- Joined: Nov 29, 2015 6:04 pm
- Full Name: RisingFlight
[MERGED] Datastore IO
Hi everyone,
On one of my VMs I can see Datastore I/O when I set the interval to the last hour:
Object Counter Units Latest Minimum Average Maximum
VM1 Datastore I/O Number 30 16 168 5300
When I set the interval to one day, the maximum value changes to around 1600.
I am confused by the chart's Maximum option. Where is it taking this maximum value from?
I believe it might have reached 5300 some time back, but in the last one-hour interval it did not reach 5300 - so how is it showing 1600 for one day?
Experts, please explain this clearly.
-
- VP, Product Management
- Posts: 27357
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: utilization >1hr flattening min/max/avg values
Hi,
Currently this is expected behavior, due to the way historical data is stored in the database for long-term reporting. Please check out this thread for more info.
Thanks!
-
- Influencer
- Posts: 23
- Liked: never
- Joined: Nov 29, 2015 6:04 pm
- Full Name: RisingFlight
[MERGED] Experts explain me about Disk I/O
I am using Veeam 9.
I selected a datastore and clicked on the Disk I/O tab,
under Chart Options: Stack by VMs, Chart view: Datastore I/O.
For one VM, with Period: Past hour, I could see
Latest 700 Minimum 0 Average 63 Maximum 700
For the same VM I changed the period to the past week. I could see
Latest 0 Minimum 0 Average 2 Maximum 190
For the past month I could see
Latest 0 Minimum 0 Average 2 Maximum 85
How are these values calculated?
I was asked this question by my officer: if the maximum for the past hour is 700, then for one month the maximum should also be 700.
How come it is 85? How is Veeam calculating this?
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: utilization >1hr flattening min/max/avg values
Hello,
As was mentioned above in this topic:
Shestakov wrote: Veeam ONE collects 20-second samples from ESXi and shows it during the day or so (depends on the settings), then it aggregates 20-second values to 5-minute intervals and shows the average value and so on up to 2-hour averages. At the same time Veeam ONE remembers the max. sample value
Please read the thread and ask additional questions if you have any. Thanks!
-
- Lurker
- Posts: 2
- Liked: 2 times
- Joined: Apr 26, 2017 9:03 am
- Full Name: ankl
[MERGED] Disk "Errors/min" different maximum values
Hi there,
I've deployed the free version of Veeam ONE, and after several days I discovered that one of my virtual machines has intermittent virtual disk issues, with spikes of high disk Errors/min values. To investigate this, I started watching the virtual disk chart for this particular virtual machine. On the past-hour chart period I see a spike of, say, 32 Errors/min. The very same value is displayed in the chart legend in the Maximum column. Naturally, I switched to the past-day chart period. What I saw puzzled me - the maximum Errors/min value changed to 18. I switched to the past-week view, and the maximum value changed again - now to 5!
Until now, I believed this field should display the maximum counter value observed during a particular period of time. Like checking your thermometer every 10 minutes and recording the temperature readings: if the maximum temperature registered during the past hour is higher than the maximum registered during the past day or week, this value becomes the new maximum - not only for the past hour, but for any period in the past from the current point in time.
Now I'm totally lost. Could anyone explain to me how you calculate these disk Errors/min maximum field values over different periods of time? Help, please.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: utilization >1hr flattening min/max/avg values
Hi ankl and welcome to the community!
The reason for this behavior is the performance data averaging explained above.
Thanks!
-
- Lurker
- Posts: 2
- Liked: 2 times
- Joined: Apr 26, 2017 9:03 am
- Full Name: ankl
Re: utilization >1hr flattening min/max/avg values
Re-sampling historical performance data is very common among monitoring/reporting products; because of it, intermittent spikes get integrated into a smooth, more or less constant value on the historical graph. I've seen this many times myself.
But a maximum is still a maximum. You could draw your graph with time-averaged data but keep the maximum values untouched, for the record and for display. In any case, you cannot "average" maximum values - after that they become "average" data, not maximums. And you already have the "Average" column in the performance chart legend.
In my opinion, the "Maximum" column in its current implementation is confusing, to say the least; you should either not resample/average maximum values over time at all, or drop the column altogether.
Anyway, thanks for the explanation.
And, BTW, great product overall, thanks!
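The proposal in this post - downsample for the graph, but carry a true per-bucket maximum alongside the average - is a standard pattern in time-series rollups. A minimal sketch under assumed, illustrative numbers (not Veeam's actual schema or code):

```python
# Downsample into (avg, max) pairs so the graph can plot averages while
# the legend still reports a true maximum. Illustrative sketch only.

def rollup_with_max(samples, factor):
    buckets = [samples[i:i + factor] for i in range(0, len(samples), factor)]
    return [(sum(b) / len(b), max(b)) for b in buckets]

samples = [5.0] * 29 + [100.0]       # one spike at the end
points = rollup_with_max(samples, 10)

print([avg for avg, _ in points])    # [5.0, 5.0, 14.5] -- smooth graph
print(max(m for _, m in points))     # 100.0 -- legend maximum stays honest
```

Storing one extra value per bucket roughly doubles the per-counter storage, which is presumably the database-growth trade-off mentioned earlier in the thread.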
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: utilization >1hr flattening min/max/avg values
I cannot disagree with you, but on the other hand, if we say "here is the graph, and that's the maximal value", the question will be: "Why don't I see the maximal value on the graph?"
We will think about how to address both objections, because both look fair.
Thanks for the kind words about the product!
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Apr 25, 2019 4:23 pm
- Contact:
Re: utilization >1hr flattening min/max/avg values
I would be interested to know if there is a more current status on this problem.
I've been discussing it with support for years, and the answer is always: that's how we want it.
I can give examples of maximum values displaying 0 at lower resolutions (monthly, weekly, etc.), although there are higher values at hour and day resolutions.
What good is an overview if I am not able to see the important values?
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: utilization >1hr flattening min/max/avg values
Hello sziehm, and welcome to the community!
Several cases have been discussed here - which one are you interested in?
-
- Lurker
- Posts: 1
- Liked: 1 time
- Joined: Feb 20, 2020 10:23 pm
- Full Name: CODY NGUEN
- Contact:
Re: utilization >1hr flattening min/max/avg values
I can't believe the wiped-out peaks have not been fixed since 2016. I am evaluating Veeam ONE Monitor. The only useful data is the last hour; the rest is useless because it does not show correct information. Why do we have to save useless logs and worry about them filling up too fast?
I am managing 500+ VMs. Who is going to sit there looking at the last hour for 500+ VMs?
Here is the capture:
Last hour: peaks at 2623 IOPS (this is the peak when users log in in the morning)
Last day: peaks at 1104 IOPS
Last week: peaks at 236
Seriously, the main purpose of monitoring software is to monitor system health and capture every anomaly so that we can take action, but this tool wipes out all anomalies.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: utilization >1hr flattening min/max/avg values
Hello Cody,
Databases still cannot handle 20-second intervals from all objects over the full historical range.
You can adjust the flattening retention and keep as much data as possible.
By the way, as soon as the peak IOPS value exceeds the threshold, an alarm will be triggered and the values will be saved, so you don't need to log the values yourself.
Thanks
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Apr 25, 2019 4:23 pm
- Contact:
Re: utilization >1hr flattening min/max/avg values
However, the problem of incorrect display also occurs when the max values are still contained in the database: an average of the maximum values is calculated nevertheless.
Example:
Max value for MB/s over the last hour: 22.23
Max value for MB/s over the last 2 hours: 8.33
This is not a problem of data storage in the database but, in my opinion, an error in the presentation, since the corresponding values are demonstrably still present.
I agree with the previous posters: historical evaluations are useless, and indeed wrong, even when the corresponding data is still available!
Best regards,
Stefan