9/4/2013 9:28:24 AM Getting virtual lab configuration
9/4/2013 9:30:04 AM Starting virtual lab routing engine
9/4/2013 9:30:22 AM Error XXX-EXCH01 - Publishing
9/4/2013 9:30:23 AM Error Error: Client error: File does not exist. File: [E:\XXX-Production\VeeamBackups\XXX-Backup_to_XXX-NAS012013-08-23T110102.vbk].
Failed to restore file from local backup. VFS link: [summary.xml]. Target file: [MemFs://Tar2Text]. CHMOD mask: [0].
9/4/2013 9:30:23 AM XXX-EXCH01 - Unpublishing
9/4/2013 9:30:23 AM Stopping virtual lab routing engine
9/4/2013 9:30:24 AM Error Error: Client error: File does not exist. File: [E:\XXX-Production\VeeamBackups\XXX-Backup_to_XXX-NAS012013-08-23T110102.vbk].
Failed to restore file from local backup. VFS link: [summary.xml]. Target file: [MemFs://Tar2Text]. CHMOD mask: [0].
9/4/2013 9:30:26 AM Job finished
9/4/2013 9:30:26 AM Sending email report
The job does not list the VMs as no longer being part of that job or as marked deleted. That is why I contacted Veeam support: I have seen that message before with other clients, but it did not appear on this one until after the upgrade. Veeam still shows restore points going back to 8/9/2013, but I cannot restore anything from them; 8/28/2013 is as far back as I can restore files or VMs from after the upgrade.
Please note that the deleted VMs retention period setting does not remove any files at all. All it does is mark the blocks inside the VBK that store deleted VMs' data. The file you attempted to restore from was deleted by the retention policy (the setting that controls the number of restore points stored on disk).
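To make sure I'm reading that right, here is a rough, purely illustrative Python sketch of the difference as I understand it (the class and function names are mine, not anything from the product): restore point retention removes whole backup files from disk, while deleted VMs retention only marks blocks inside a file that survives.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class RestorePoint:
    path: str                                      # e.g. a .vbk or .vrb file on disk
    created: datetime
    vm_blocks: dict = field(default_factory=dict)  # vm_id -> block state

def apply_restore_point_retention(points, keep):
    # Restore point retention: whole files beyond the configured count are
    # deleted from disk -- this is what produces the
    # "Removing '...' per retention policy" lines in the job log.
    points = sorted(points, key=lambda p: p.created)
    removed = points[:-keep] if len(points) > keep else []
    return points[len(removed):], removed

def apply_deleted_vms_retention(point, vms_in_job, last_seen, now, days):
    # Deleted VMs retention: no files are removed at all; blocks belonging to
    # VMs that left the job more than 'days' ago are only marked as reusable
    # space inside the surviving backup file.
    for vm_id in point.vm_blocks:
        if vm_id not in vms_in_job and now - last_seen[vm_id] > timedelta(days=days):
            point.vm_blocks[vm_id] = "marked deleted (space reclaimable)"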
I do not see how that can be the case, since the log from the last backup run before the restore attempt shows the last file removed by the retention policy was: 9/3/2013 3:40:13 PM :: Removing 'E:\XXX-Production\VeeamBackups\XXX-Backup_to_NAS _iScsi_2013-07-30T130045.vrb' per retention policy
I was attempting to restore from 8/23/2013, which would have been well within the 56 restore points set on the job when I attempted the restore on 9/4/2013. I should have 28 days of restore points; instead I only had 7 (8/28/2013 was as far back as I could restore files from). What happened to all the other restore points if the last one removed was 2013-07-30T130045.vrb?
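For reference, the arithmetic behind that expectation as a quick sketch (the twice-a-day schedule is my assumption, since that is what 56 points covering 28 days implies):

from datetime import date, timedelta

restore_points_configured = 56          # retention setting on the job
backups_per_day = 2                     # assumed schedule implied by 56 points / 28 days
restore_attempt = date(2013, 9, 4)

days_covered = restore_points_configured / backups_per_day        # 28 days
expected_oldest = restore_attempt - timedelta(days=days_covered)

print(f"{days_covered:.0f} days of coverage, back to about {expected_oldest}")
# -> about 2013-08-07, so an 8/23/2013 point should still have been restorable,
#    yet only about 7 days (back to 8/28/2013) actually were.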
I did open a support case and this was the last response I received from Veeam support:
I appreciate your insight, and you have raised valid points.
At this time this is behavior by design, but I feel that your points deserve additional consideration. I will pass your response along to my next level, but to cover all bases you may also want to raise this question in our forums, since they are monitored by the developers.
Are there any other concerns we can address for you?
TravisP wrote: When vCenter was upgraded from 4.1 to 5.1 it changed the instance UUID of the VMs. There was no indication of this during or after the vCenter upgrade, and the only way to have seen the change was to check each VM's UUID before the upgrade and compare it to the UUID after the vCenter upgrade.
Btw, was it an in-place vCenter upgrade or a new installation?
Then the IDs should not actually change. MoRef IDs only change when hosts or VMs are registered on a new instance of vCenter, not during an upgrade. So my point is that what happened in your case should be investigated more closely; it is not simply that your backups were deleted according to deleted VMs retention without any notification.
One of our engineers just did another upgrade of vCenter and Veeam. vCenter was an in-place upgrade from 4.1 to 5.1 with Update 1, the ESXi hosts were upgraded rather than clean-installed, and Veeam was an in-place upgrade from 6.5 to 7.0. After the upgrade, the Veeam backups failed with the following error:
"Task failed Error: Host with uuid '37363836-3331-4d32-3232-323630305037' was not found"
Because the hosts' UUIDs changed, the VMs' UUIDs would have changed as well, and the deleted VMs retention policy would now be applied to those VMs.
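In case it helps anyone else hitting this, here is a rough pyVmomi sketch for snapshotting the identifiers involved (host hardware UUID, VM instance UUID, MoRef ID) so they can be diffed before and after an upgrade. The vCenter address and credentials are placeholders, and this is a read-only sketch, not a tested tool:

# Dump host and VM identifiers for before/after comparison.
# Requires pyVmomi (pip install pyvmomi). Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only: skips certificate checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        # Hardware UUID is the style of ID shown in the
        # "Host with uuid '...' was not found" error.
        print("HOST", host.name, host.hardware.systemInfo.uuid)

    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in vms.view:
        # instanceUuid is vCenter's per-VM identifier; _moId is the MoRef ID.
        instance_uuid = vm.config.instanceUuid if vm.config else "n/a"
        print("VM", vm.name, instance_uuid, vm._moId)
finally:
    Disconnect(si)   # views are released when the session ends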
I have reviewed your case, and the files were indeed deleted because of the retention policy settings, not due to deleted VMs retention. Still, the behavior you observe is not expected, as upgrading ESXi hosts shouldn't change their IDs or the IDs of the VMs.
Out of curiosity, can you please double-check (if possible) with the engineer who did the upgrade exactly what steps he took? I'm asking because removing hosts from inventory and re-adding them also causes the IDs to change, and that could have been the case in your situation.