For starters: this is a job I have set up to run on about 43 VMs every Saturday, and it is set to keep 4 retention points.
I first edited this job back on Wednesday or Thursday to remove a single VM from the selection that was no longer needed. I then changed the setting for how long to keep deleted VM data to 1 day, since I didn't care to keep that one old VM's data at all. I expected it would get removed from the backup on Saturday.
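For context, here is roughly how I understood the deleted-VM retention setting to behave. This is just my mental model, not Veeam's actual logic, and every name and data structure below is made up for illustration:

```python
from datetime import datetime, timedelta

# Sketch of how I *expected* deleted-VM retention to work.
# Hypothetical names throughout -- this is not Veeam's code.

DELETED_VM_RETENTION = timedelta(days=1)  # the setting I changed

def prune_deleted_vm_data(job_selection, last_backup_times, now):
    """Delete backup data only for VMs that are no longer in the job
    selection and whose newest restore point is past the retention window."""
    for vm_name, last_backup in last_backup_times.items():
        if vm_name not in job_selection and now - last_backup > DELETED_VM_RETENTION:
            print(f"VM '{vm_name}' is outdated and will be deleted")
            # ...remove that one VM's restore points, nothing else...

# What I expected on Saturday: only the removed VM gets flagged.
job_selection = {"VM01", "VM02"}           # "XPTemp2" removed from the job
last_backup_times = {
    "XPTemp2": datetime(2011, 1, 1),       # the removed VM
    "VM01": datetime(2011, 1, 8),
    "VM02": datetime(2011, 1, 8),
}
prune_deleted_vm_data(job_selection, last_backup_times, datetime(2011, 1, 15))
```

Under that understanding, only XPTemp2's data should have aged out, with every other VM's restore points left alone.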
On Saturday, before the job's scheduled run time, I edited it again, this time to add a new VM to the selection list. Then I manually started the job.
Immediately I realized I had forgotten to set an exclusion on that new VM for its third VMDK disk, so I right-clicked and stopped the job so I could go fix that in the job properties. But the job just sat there saying "stopping" for a really long time. I went off to take care of some other things and came back to check on it a few minutes later. It was *still* saying "stopping", so I right-clicked to bring up the real-time statistics, and to my horror it was going through every single VM saying, "VM 'xxxxxx' is outdated and will be deleted".
It finished a few seconds later, before I could do anything about it (not that I could have), and sure enough, all retention points for every VM in the job were deleted. The repository folder for that job confirmed it: it was basically an empty folder minus a small .vbm file, compared to the 2TB+ of VBK/VRB files that were there previously.
Does anyone have any clues? I have attached a screenshot of it deleting all the VMs. The machine "XPTemp2" at the top of the list is the only VM I removed from the job's selection list. And I know I didn't accidentally remove all 43 VMs or something, because as soon as it was finished nuking all of my data, I started it right back up and it ran fine.
