TimeKnight wrote:I know opening a case will just result in them asking me to run a full backup, which will band-aid the problem but does not really correct it.
Running an active full is typically the only currently available workaround; however, to determine the actual cause of your issue and investigate and address it properly, the support team needs your debug logs, so opening a case is highly appreciated. If you are not satisfied with the suggested workaround, you can always ask to escalate the case for further investigation.
TimeKnight wrote:since active fulls are not transformed when a synthetic backup is performed.
This is not 100% true, as the most recent active full should be transformed into a rollback file. All preceding full backups should stay intact.
Well, support may need to get the debug logs, but they never asked me for them.
All they did, in broken English, was say this is a known issue and that you have to run active fulls to fix it. Oh, and sometimes you have to delete the database entry for the backups to get the fulls to run, or create new jobs altogether.
I have now had 3 of my 6 backup jobs fail with this same error, and that leaves me wondering when the rest will fail.
I have to say this support incident really leaves me disappointed.
bernardw wrote:I have to say this support incident really leaves me disappointed.
Well, you always have the option to involve support management if you feel your case is not being handled properly; there is a Talk to a Manager button on the support portal for that. Your feedback is what makes our support service better.
foggy wrote:
Running an active full is typically the only currently available workaround; however, to determine the actual cause of your issue and investigate and address it properly, the support team needs your debug logs, so opening a case is highly appreciated. If you are not satisfied with the suggested workaround, you can always ask to escalate the case for further investigation.
I have opened a case for this before, and even after uploading logs the only answer I got was to do the active full. Perhaps that really is the only solution after the fact, and what I need to do is push to find out why this is happening.
foggy wrote:
This is not 100% true, as the most recent active full should be transformed into a rollback file. All preceding full backups should stay intact.
Well, this does not seem to be working for me, as I have 2 large VBK files for this backup job. I was also told by support that this was the case. My support engineer also apologized for not letting me know this before recommending the active full.
I guess I should open another case for this and escalate it.
TimeKnight wrote:Well this does not seem to be working for me as I have 2 large VBK files for this backup job.
After the transform of the backup chain, you should end up with the most recent full VBK (created synthetically from the latest and all previous restore points in the chain), a set of rollbacks (including the one transformed from the base VBK), and possibly some previous backup chains with their respective VBK files. Isn't that the case with your jobs?
Reviewing the corresponding user guide section (p. 31) will probably make this clearer than my explanation.
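For illustration, a reversed incremental repository after a transform typically holds one current full plus a chain of rollback (.vrb) files, something like this (file names and dates below are hypothetical):

```
BackupJob2012-11-25T220000.vbk   <- most recent full (rebuilt synthetically)
BackupJob2012-11-24T220000.vrb   <- rollback: state one day earlier
BackupJob2012-11-23T220000.vrb
BackupJob2012-11-18T220000.vbk   <- older full from a previous chain, kept
                                    until retention removes that chain
```

So a second .vbk on disk is expected for a while after an active full, until retention ages the old chain out.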
First off, let me say that I just hit the FIB error again this week. It seems to happen most often with my largest backup job (1.5 TB) while it is transforming the forward incremental chain into a rollback chain. I think this process is quite disk- and CPU-intensive, and as a result something blips in the middle of it (which is strange, as I had 3 other jobs running at the same time and they all completed successfully).
As part of all the issues I am having with the FIB error, I have been talking with my manager about how to set up the jobs to find the best middle ground between disk-intensive and CPU-intensive workloads. At least now I see that I will have to be careful about the number of active fulls I run, because the old full stays around while the new one is built up.
So, I have discussed this issue internally with R&D and support.
R&D is completely convinced that the issue was fully fixed earlier and should not appear in the current version on newly created jobs (aside from actual backup file corruption due to storage issues, which happens very rarely). They want to see logs for every such occurrence on the current version.
I've also discussed this with support management, and we agreed to always collect logs from people reporting this issue, instead of just assuming it is the old known issue already fixed in the current code and simply having customers re-create the job.
Please note that in any case, if the issue was introduced into the backup file by an earlier version of B&R, recreating the job is indeed still the ONLY way to fix it. However, it is best to provide support with debug logs to confirm that what you are facing is indeed that known and resolved issue; otherwise, recreating jobs may not help in the long run (in case there is another, unknown bug in the current code that leads to this error).
I am really getting pissed off with this error. It just occurred again when the backup location ran out of space (because of all the active fulls I had to create to work around the error). So I deleted some old backup files to free up space and then retried the backup. What do you know, IT FAILED AGAIN!
I will be calling in to support in the next couple minutes about this.
I am running Server 2012 with an ISCSI backup target.
I have had a few good discussions with Veeam tech support, and I did have everything set up correctly, so it is a big question mark why things broke. But at least this weekend all the jobs ran and completed successfully. Now I just have to get them to tape (which is proving to be harder than I expected).
Yes, I was using the dedup feature of Server 2012, but found that it was taking up too much hard drive space (I have a 10 TB drive and half of it is used by something other than the backup files).
I have just turned off the dedup feature to see if things work better and whether I can reclaim some of the missing space.
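For anyone wanting to check and roll back Server 2012 deduplication from PowerShell rather than the GUI, the built-in Deduplication cmdlets can do it. A sketch (the drive letter is hypothetical; note that simply disabling dedup does not rehydrate existing data, the unoptimization job does that and it needs enough free space to hold the rehydrated files):

```powershell
# See how much data the dedup engine is currently holding on the volume
Get-DedupStatus -Volume 'D:'

# Stop deduplicating new data on the backup volume
Disable-DedupVolume -Volume 'D:'

# Rehydrate already-deduplicated files so space usage becomes visible again
Start-DedupJob -Volume 'D:' -Type Unoptimization
```

The "missing" space is usually the chunk store under System Volume Information, which is why it doesn't show up as backup files.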
In my case, I also disabled deduplication on the volume, but that did not solve the problem. During my investigation, I found that while my backup job was running, a defrag process consumed all of the server's memory, so the job failed due to lack of memory.
To solve the problem, I disabled the defrag schedule in Windows, and everything now works fine.
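On Server 2012, the built-in maintenance defrag runs as the ScheduledDefrag task. Assuming that is the task that was colliding with the backup window, it can be disabled like this:

```powershell
# Disable the built-in scheduled defragmentation task so it cannot
# run (and consume memory/IO) during the backup window
Disable-ScheduledTask -TaskPath '\Microsoft\Windows\Defrag\' -TaskName 'ScheduledDefrag'
```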
My backup target was the only volume that had deduplication enabled, so once I turned it off there was nothing more to do. I don't think memory was the issue, as the server has 40 GB of RAM and 8 cores (an old VMware host). But yes, after turning dedup off I have not had any corrupted synthetic fulls.
foggy wrote:
After the transform of the backup chain, you should end up with the most recent full VBK (created synthetically from the latest and all previous restore points in the chain), a set of rollbacks (including the one transformed from the base VBK), and possibly some previous backup chains with their respective VBK files. Isn't that the case with your jobs?
Reviewing the corresponding user guide section (p. 31) will probably make this clearer than my explanation.
Nope, that is not what is happening for me. I looked after this weekend's backup (which included a transformation), and not only do I have two 1.1 TB .vbk files, I also have more than 20 restore points, while my backup job is configured to keep 15!
TimeKnight wrote:Nope, that is not what is happening for me. I looked after this weekend's backup (which included a transformation), and not only do I have two 1.1 TB .vbk files, I also have more than 20 restore points, while my backup job is configured to keep 15!
Brian, I'm not saying that you should not have two VBK files after the transform; please read my post carefully. Anyway, it's hard to investigate without seeing the actual backup files you have in the repository. I suggest contacting support directly so they can review the restore points currently on your backup storage and your job settings, and see what is actually happening. Thanks!
I just do not understand any situation where I would have two .vbk files in my backup directory the day after my “transformation.” Hopefully support can help.
TimeKnight wrote:Why does the transformation not convert that previous full into a rollback?
We've discussed that previously:
foggy wrote:
This is not 100% true, as the most recent active full should be transformed into a rollback file. All preceding full backups should stay intact.
foggy wrote:This is not 100% true, as the most recent active full should be transformed into a rollback file. All preceding full backups should stay intact.
So only the most recent "full" will be transformed; is that a changeable option? In my case, I had to perform an active full due to a corrupt backup chain, and the only known correction/workaround is "rerunning the full backup".
I only want to ever have one "full backup" on disk to save space.
TimeKnight wrote:So only the most recent "full" will be transformed, is that a changeable option?
No.
TimeKnight wrote:In my case, I had to perform an active full due to a corrupt backup chain, and the only known correction/workaround is "rerunning the full backup"
Yes, that is what I was talking about.
TimeKnight wrote:I only want to ever have one "full backup" on disk to save space.
All previous fulls will be removed over time according to your retention settings, and you'll end up with a single full and rollbacks.
foggy wrote:
All previous fulls will be removed over time according to your retention settings, and you'll end up with a single full and rollbacks.
Unfortunately, this statement is not necessarily true. If you had to delete the backup from Veeam to be able to create a new active full, Veeam no longer knows about any of the past backups, and as a result you will need to delete them yourself.
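If you do end up with orphaned files on the repository, a quick way to find candidates before deleting anything by hand is something like this (the path and retention window are hypothetical; review the list yourself, since Veeam no longer tracks these files):

```powershell
# List full (.vbk) and rollback (.vrb) files older than the retention window,
# sorted oldest first, so they can be reviewed and removed manually
$repo   = 'D:\Backups\MyJob'        # hypothetical repository path
$cutoff = (Get-Date).AddDays(-15)   # match your job's retention setting

Get-ChildItem -Path $repo -Recurse -Include *.vbk, *.vrb |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Sort-Object LastWriteTime |
    Format-Table Name, LastWriteTime,
        @{ n = 'SizeGB'; e = { [math]::Round($_.Length / 1GB, 1) } }
```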
I have received this error as well, more than once: There is no FIB [summary.xml] in the specified restore point. Failed to restore file from local backup. VFS link: [summary.xml]. Target file: [MemFs://Tar2Text]. CHMOD mask: [0].
After contacting support, it was determined that a backup to tape was in progress and had the file in use (this wasn't planned, the tape backup just took longer than expected) while the Veeam backup tried to run a transform, and this corrupted the entire backup chain.
The solution of rerunning a full backup is just not acceptable. It's not a solution; it's not even a very good workaround. This backup goes offsite and therefore needed to be re-seeded (not a simple task, as others in the same situation know).
So I managed to re-seed it and get the backup running again, and made sure this time that no tape backups would conflict with the Veeam backup. Then, less than 2 weeks later, I ran into this issue again. This time it wasn't the tape backup but a SureBackup job, of all things, that had hung and carried over into the backup window, holding the file in use. Once the backup ran, it corrupted the entire backup chain... again.
The fact that the backup file simply being in use (read-only, mind you) corrupts the entire backup chain when a transform is attempted is, to me, absolutely and utterly RIDICULOUS. And that it could happen because of a SureBackup job is even worse.
A file could end up being in use for any number of reasons, and the fact that Veeam isn't able to recover from something that simple is just unacceptable.
I love Veeam, don't get me wrong, but such a robust product should be able to handle this: either error out without causing damage, repair the damage after the fact, or, how about this, just check whether the file is in use before going ahead and corrupting the whole thing. I know it can't be that difficult to check that the backup file isn't in use.
To the product management team: please fix this! At the very least, check the status of the file before letting the software corrupt the whole damn thing with no ability to repair it.
Sorry for the rant, but if this issue continues, having to reseed so often will make Veeam almost useless for us backing up offsite.
foggy wrote:Jason, just to make sure, have you recreated the job during reseed, as it is advised in this topic above?
I believe that I created the job after the upgrade to 6.5, but I need to reseed it again, so I will recreate the job just to make sure. Although, the support tech I worked with never suggested that.
Also, I believe you could use a scheduled PowerShell script to check whether the VBK file is locked prior to running the transform; that should address your request in some sense.
Alexander is right here. There is an existing custom PS function called File-Lock that checks whether a given file is in use. You can use it in conjunction with Windows Task Scheduler so that, before the backup operation takes place, the script verifies that the backup file isn't being used by any other application:
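The function body itself didn't make it into the post, so here is a minimal sketch of what such a check could look like (the File-Lock name is kept from the post above; the implementation and the sample path are assumptions, not the original script):

```powershell
function File-Lock {
    param([Parameter(Mandatory)][string]$Path)
    # Try to open the file with an exclusive lock; if this throws an IO
    # exception, another process (tape job, SureBackup, etc.) still has it open.
    try {
        $stream = [System.IO.File]::Open($Path, 'Open', 'ReadWrite', 'None')
        $stream.Close()
        return $false   # file is not locked
    } catch [System.IO.IOException] {
        return $true    # file is locked by another process
    }
}

# Example: run from Task Scheduler shortly before the job's transform window
if (File-Lock -Path 'D:\Backups\MyJob\MyJob.vbk') {
    Write-Warning 'Backup file is still in use - investigate before the transform runs.'
}
```

The script only detects the condition; acting on it (alerting, or postponing the job) is up to how you wire it into the scheduler.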