Thanks for the reply. So really that should mean that, in this case, the 20-second average reported by V/Monitor is the more correct figure? We now have two parties (apps vs ops) arguing over which value should be considered more accurate: the apps guys say the 5-minute interval represents a larger sample and is therefore more representative (like an election poll), whereas we say there are more 20-second averages stored over the same period, so it's V/Monitor that's giving the more realistic picture. I know you're a little biased, but do you agree with the latter interpretation? I suppose it all depends on how the representative data is stored long term.
To put it another way... in your example above you say to multiply the 20-second result by 15 to arrive at something comparable to the 5-minute average. I'm sure you were just generalising, but can you confirm that the more correct way of doing that calculation would be to add up 15 individual 20-second "chunks" and then divide the total by 15? Done that way, you'd arrive at a figure for the 5-minute window that should line up with what V/Monitor is reporting anyway (see the rough sketch below).
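Just to sanity-check my own reasoning, here's a rough sketch of what I mean. The numbers are entirely made up, and this doesn't model how V/Monitor or the 5-minute tool actually sample; it just shows that the mean of fifteen 20-second averages equals the single 5-minute average over the same window, while only the 20-second series shows a short spike:

```python
# Illustrative only: synthetic 1-second CPU% samples, not real V/Monitor data.
per_second = [10.0] * 300          # 5 minutes of 1-second samples at 10%
per_second[100:120] = [90.0] * 20  # one 20-second spike to 90%

# Fifteen 20-second averages vs one 5-minute average over the same window.
avg_20s = [sum(per_second[i:i + 20]) / 20 for i in range(0, 300, 20)]
avg_5min = sum(per_second) / 300

print(avg_20s)            # fourteen values of 10.0 and one of 90.0
print(avg_5min)           # 15.33... -- the spike is flattened into the average
print(sum(avg_20s) / 15)  # 15.33... -- identical to the 5-minute average
```

If that's right, then neither figure is "wrong" as an average; the 20-second data just keeps the peaks visible where the 5-minute average smooths them out.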
Thanks for your help. As I mentioned, I'm really trying to argue in favour of the V/Monitor results, so I'd appreciate any input that supports this.