The "data processed" of my daily backup (rev.incr.) increased in size dramatically over the last night.
I added a new VM to the Hyper-V host which is around 600 GB (332 GB used) in size, but the data processed went from 2.7 TB the day before to 6.2 TB this day.
The incremental backup therefore takes around 3 times as long as before.
How can the data processed have increased this dramatically just because of a new VM that is quite small?
Day 1: [screenshot of the job session statistics]
Day 2: [screenshot of the job session statistics]
EDIT:
I think I found it: the job was performing the backup file health check. For that, it needs to process the whole VM data, not only the "data used". Right?
I checked with the dev team: the health check doesn't change the processed data counter.
I'd recommend contacting our support team for a closer look at your issue.
Hi Steve,
I'm seeing that you have added a VM to your job. If you look at the Success counter, it went from 15 VMs to 16 VMs.
Could it be that your newly added VM is 3.5 TB in size? That would account for the jump from 2.7 TB to 6.2 TB.
Thank you for your reply. But if you check my initial post, I wrote that the new VM is around 600 GB in size, of which around 300 GB is used space.
So no, sadly that is not the case.
Doh! My bad. The second sentence just flew over my head as I was reading.
Just doing a breakdown: the 'read' data is correct in that the VM has only used up 332 GB, but I'm noticing that your transfer dedupe/compression ratio went down drastically.
That still doesn't explain the increased 'processed' VM size, but it is perhaps a good idea for you to check each VM one by one and see whether one or more of them have unwanted .vhd(x) disks attached (see the sketch below).
Just a thought.
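If you want to do that check quickly, here is a minimal Python sketch that totals the .vhd/.vhdx files per VM folder so an unexpected extra disk stands out. The path is a placeholder for wherever your host stores its virtual disks, and the one-folder-per-VM grouping is an assumption; Hyper-V's Get-VM and Get-VHD PowerShell cmdlets would of course work as well.

```python
# Minimal sketch: total up virtual disk files per VM folder to spot
# unexpected .vhd(x) disks. DISK_ROOT is a hypothetical path; point it
# at wherever your Hyper-V host actually stores its virtual disks.
from collections import defaultdict
from pathlib import Path

DISK_ROOT = Path(r"D:\Hyper-V\Virtual Hard Disks")  # placeholder path

disks_by_vm = defaultdict(list)
for disk in DISK_ROOT.rglob("*.vhd*"):  # matches .vhd and .vhdx
    # Group by parent folder, assuming one subfolder per VM.
    disks_by_vm[disk.parent.name].append((disk.name, disk.stat().st_size))

for vm, disks in sorted(disks_by_vm.items()):
    total_gb = sum(size for _, size in disks) / 1024**3
    print(f"{vm}: {len(disks)} disk(s), {total_gb:.1f} GB total")
    for name, size in disks:
        print(f"  {name}: {size / 1024**3:.1f} GB")
```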
Thank you for following up. No, I didn't log a support case. I observed the backups and found that everything is correct, even the final backup size. Therefore I conclude that all is well and this isn't a problem.
Do you still want me to open a support case? I can do that, no problem.
Just so I understand it correctly: all is well with the backups, but the data shown in the UI is still incorrect. Is that the case?
Then yes, I would like a support call (low priority, since everything works anyway). But if we can find out why the wrong size is shown (it might be a small bug or something), then we can also fix it in the next update or release.
But obviously, it's your decision whether you want to spend time on it or not.
First, I'd like to rewrite / re-express the initial issue:
I found that the value in "data processed" is actually the overall size of all my 16 VMs including free space. So this seems to be correct.
What I wonder is why it showed only a fraction of this (2.7 TB instead of the current 6.2 TB) before I added the most recent VM, which is itself only around 600 GB in size including free space.
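Here is a quick sanity check of those numbers in a few lines of Python; the figures are the ones quoted above, and the conclusion is just my reading of them:

```python
# Sanity check on the counters from this thread (all values in TB).
processed_before = 2.7    # "data processed" with 15 VMs
processed_after = 6.2     # "data processed" with 16 VMs
new_vm_provisioned = 0.6  # new VM: ~600 GB provisioned, ~332 GB used

jump = processed_after - processed_before
print(f"Counter jumped by {jump:.1f} TB")           # -> 3.5 TB
print(f"New VM adds at most {new_vm_provisioned} TB")

# 3.5 TB is far more than the 0.6 TB the new VM could contribute, so
# the new VM alone cannot explain the jump. It fits the reading above:
# the counter now reflects the full provisioned size of all 16 VMs
# rather than only the used space.
```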
Still don't have a clue? Ok then, I'll open a support case.