No, not immediately; the first check was OK. Here is exactly what happened:
- A backup copy job mirrors the backups from the primary site (last active full ~4 weeks ago, health check yesterday) to a remote site. The remote site used a Dell MD3600f (RAID 6) as backup storage.
- Last week this MD3600f raised a strange warning that a disk is in a pre-failure state.
- Unfortunately, my co-worker replaced the wrong disk in that same RAID 6 group. We waited for the RAID rebuild to finish before replacing the actually defective disk. My theory is that the first rebuild used data read from the partially failed disk, so corrupt data may have been written across the array.
- After all the rebuilds were done, nearly every health check we ran on the remote site reported corruption.
- Since we had a spare Hitachi HUS110 at the main site, we used temporary backup copy jobs to create a new backup seed on that storage.
- We then moved that storage system to the remote site, created and rescanned the repository, disabled the copy jobs, removed the old, corrupt backup copies from the configuration, pointed the copy jobs at the new repository and mapped them to the seeded backups.
- All copy jobs passed an initial health check without errors (see the sketch after this list for what such a check boils down to).
- Two days later (after the first merge), another health check reported corruption in the same VM file as on the old storage.
- A health check on the primary site showed no issues with the primary backups.
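For what it's worth, my understanding of the health check: it re-reads the data blocks in the backup files on the target and verifies them against the checksums recorded when the blocks were written. Purely to make that concrete, the check conceptually boils down to something like the Python below (an illustration only; the block size, manifest format and file names are all made up, nothing here is Veeam's actual on-disk format):

```python
import hashlib
import json
from pathlib import Path

CHUNK_SIZE = 1024 * 1024  # illustrative 1 MiB blocks; the real block size is a job setting

def chunk_digests(path: Path):
    """Yield (block_index, sha256_hex) for each fixed-size block of a file."""
    with path.open("rb") as f:
        index = 0
        while chunk := f.read(CHUNK_SIZE):
            yield index, hashlib.sha256(chunk).hexdigest()
            index += 1

def corrupt_blocks(backup_file: Path, manifest_file: Path) -> list[int]:
    """Indices of blocks whose current digest no longer matches the digest
    recorded at backup time (hypothetical manifest: a JSON list of hex
    digests, one entry per block)."""
    recorded = json.loads(manifest_file.read_text())
    return [i for i, digest in chunk_digests(backup_file)
            if i >= len(recorded) or digest != recorded[i]]

if __name__ == "__main__":
    bad = corrupt_blocks(Path("vm01.vbk"), Path("vm01.vbk.manifest.json"))
    print("corrupt blocks:", bad or "none")
```

Which is also why a passing initial check doesn't rule out bad blocks arriving later: it only proves the blocks still match what the job wrote up to that point in time.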
The only thing I can think of is that Veeam tried to "heal" the corrupt blocks on the remote site after the backup target was replaced (we kept the same jobs).
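One way to test that theory next time would be to snapshot the per-block digests of the seed file right after it passes its first health check, and again after the first merge. If the blocks the health check later flags as corrupt are among the ones the merge rewrote, that would point at the merge/heal pass rather than at the new storage. A rough sketch along the same lines as above (paths and snapshot format are again made up):

```python
import hashlib
import json
from pathlib import Path

CHUNK_SIZE = 1024 * 1024  # same illustrative 1 MiB block size as above

def chunk_digests(path: Path) -> dict[int, str]:
    """Map block index -> sha256 digest for each fixed-size block of a file."""
    digests, index = {}, 0
    with path.open("rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digests[index] = hashlib.sha256(chunk).hexdigest()
            index += 1
    return digests

def save_snapshot(backup_file: Path, snapshot_file: Path) -> None:
    """Record the current per-block digests of a backup file."""
    snapshot_file.write_text(json.dumps(chunk_digests(backup_file)))

def changed_blocks(backup_file: Path, snapshot_file: Path) -> list[int]:
    """Indices of blocks whose content differs from the saved snapshot."""
    before = {int(k): v for k, v in json.loads(snapshot_file.read_text()).items()}
    after = chunk_digests(backup_file)
    return sorted(i for i in before.keys() & after.keys() if before[i] != after[i])

# Usage: save_snapshot() right after the seed passes its first health check,
# then changed_blocks() after the first merge has run.
```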