-
- Service Provider
- Posts: 374
- Liked: 123 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
All incremental RPs are dependent on missing RPs but backup/restore keeps working and Validator finds no errors
Yesterday I discovered, to my horror, that one forever-incremental job had had all incremental RPs for almost all VMs in the state "This backup file is dependent on an unavailable backup file" for several days, while new incremental backups kept running without errors. To be on the safe side, I immediately started an Active Full and investigated further.
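For anyone who wants to script the same precaution, an Active Full can also be kicked off from the Veeam PowerShell snap-in; a minimal sketch, assuming the pre-v11 snap-in and with the job name as a placeholder:

    # Load the Veeam B&R snap-in and force an Active Full run of one job
    Add-PSSnapin VeeamPSSnapin
    $job = Get-VBRJob -Name "Forever-Incremental-Job"   # hypothetical job name
    Start-VBRJob -Job $job -FullBackup                  # -FullBackup forces an Active Full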
Now here's the strange thing: I can restore from the restore points that are in the error state without any errors, which should be impossible according to https://helpcenter.veeam.com/archive/ba ... point.html. I also ran Veeam.Backup.Validator.exe against various VMs in this job (targeting the restore points in the error state), and it reported no errors.
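For reference, this is roughly how I pointed Validator at a single VM in the job; the exact switches are from memory and the job/VM names are placeholders, so adjust to your environment:

    # Run from the B&R server; default install path, adjust if needed
    cd "C:\Program Files\Veeam\Backup and Replication\Backup"
    .\Veeam.Backup.Validator.exe /backup:"Forever-Incremental-Job" /vmname:"SomeVM"   # validates the VM's latest restore point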
Also, when checking the job history, there were no backup sessions that could be missing. For example (the backup runs every 6 hours), these are the oldest entries in the backup chain:
19.08.2019 18:00 incremental backup --> "This backup file is dependent on an unavailable backup file"
19.08.2019 12:00 incremental backup --> Full (merged) backup, end of chain
There were no backup sessions between these two points, so I don't understand what "19.08.2019 18:00" could depend on other than the restore point at "19.08.2019 12:00".
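As a cross-check of what the GUI shows, the chain can also be enumerated from PowerShell; a minimal sketch, again with the job name as a placeholder:

    # List restore points for one job, oldest first, with their type (Full/Increment)
    Add-PSSnapin VeeamPSSnapin
    $backup = Get-VBRBackup -Name "Forever-Incremental-Job"   # hypothetical job name
    Get-VBRRestorePoint -Backup $backup |
        Sort-Object CreationTime |
        Format-Table VmName, CreationTime, Type -AutoSize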
Could this be some kind of metadata error that can be fixed?
Case # 03784688
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: All incremental RPs are dependent on missing RPs but backup/restore keeps working and Validator finds no errors
Hello,
As I understood from your support case, some backups were removed from the repository, so the "This backup file is dependent on an unavailable backup file" messages are expected.
I would suggest continuing to investigate with the support specialist.
Thanks!
-
- Service Provider
- Posts: 374
- Liked: 123 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: All incremental RPs are dependent on missing RPs but backup/restore keeps working and Validator finds no errors
Removal was suggested by the support specialist, but I'm quite sure that nothing was deleted.
I also found out what is missing: VBK files with timestamps 5 months old (we keep roughly 35 days of backups) that don't show up in the per-VM view. However, the oldest "real" VBK still exists and is read during restore tests (as seen in Resource Monitor).
My guess is that due to some error during a compact operation (the VBK still has a .temp extension both on disk and in the Veeam GUI, which seems to occur occasionally), some of the backup chain metadata (the part shown in the GUI) was never repointed to the new file. Some other metadata knows the correct file and has been running backups/compacts/merges for months without errors.
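If anyone wants to check for the same symptom, leftover compact files are easy to spot straight from the repository folder; a minimal sketch, with the repository path as a placeholder:

    # Find compact leftovers: backup files still carrying a .temp extension
    Get-ChildItem -Path "D:\Backups\Forever-Incremental-Job" -Recurse -Filter "*.temp" |
        Format-Table Name, Length, LastWriteTime -AutoSize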
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: All incremental RPs are dependent on missing RPs but backup/restore keeps working and Validator finds no errors
Full backups may have their own (GFS) retention.
Indeed, that could be a bug. In any case, it will be faster and easier to investigate the issue with support, as we can only guess here without seeing the whole picture and the logs.
-
- Service Provider
- Posts: 374
- Liked: 123 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: All incremental RPs are dependent on missing RPs but backup/restore keeps working and Validator finds no errors
Some updates:
The first problem got worse when support recommended deleting the backups from the configuration and re-importing them. This effectively broke the job entirely; a configuration restore did not fix the issue.
Escalation determined that the original problem was indeed a metadata error in the database. The root cause could not be determined because the problem had started earlier than the oldest available logs. The escalation engineer suggested that this might be improved in v10 (auto-healing of database inconsistencies during health checks).
The re-import breaking the job is still being investigated.