-
mkretzer
- Veeam Legend
- Posts: 1323
- Liked: 475 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
12.3 issue with rescanning for new backups in GFS jobs
Hello,
I wonder if it still makes sense to open a 12.3 ticket or wait for 13 upgrade:
After switching to LTO10 we changed our tape scheduling so that our GFS jobs start while the source jobs are still running (the source jobs create synthetic fulls, so the tape job can then just take them). According to the documentation, this should lead to the tape job waiting for the backup files to appear in the source job.
This "works" so far, but it behaves strangely: we have 3 drives writing, and even a day after all synthetics are available some job components still show "Some backup files are locked by the backup job, waiting for availability..." and "No backup files found". Then, after the currently writing backup job component finishes, it does the rescan, finds the next set of VBKs (but only from one source job) and starts writing them to tape.
The problem is that some of these backup components have only one VM in them, which leads to the backup job using only 1 or 2 drives while the other drives sit idle.
Is this a known 12.3 inefficiency or should we open a case?
Markus
-
david.domask
- Veeam Software
- Posts: 3197
- Liked: 742 times
- Joined: Jun 28, 2016 12:12 pm
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
Hi mkretzer,
> The problem is that some of these backup components have only one VM in them, which leads to the backup job using only 1 or 2 drives while the other drives sit idle.
Just to confirm, the issue here is that the 1-2 drives are locked by the Tape GFS job and not freed up for other jobs / operations? Or the issue is that there are backup files that should be available for the Tape GFS job to grab but it doesn't grab them?
I'm having a little trouble understanding the main complaint, but maybe will help to review the GFS scan process first and point out from that where you see a challenge:
https://helpcenter.veeam.com/docs/vbr/u ... tml?ver=13
In short, Tape GFS (and normal tape jobs) have the following retry behavior:
- Determine candidate backup files to be backed up
- Queue the tape backup task for the candidates
- Once the task starts, re-check that the previously found candidates are still the most valid, if not, requeue the task (UI message will be "source backup files changed")
- Try to do the backup, if there is an expected blocker that prevents backup (e.g., some other operation locks the backup files), requeue the task
- Let all tasks complete, regardless of success/failure/retry
- Check if any tasks require retry, retry those tasks going through the same process outlined above until all tasks are successful OR job must abort due to a failure
Drives should be freed up for other operations if the current job doesn't require it
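For illustration, the retry flow described above could be sketched roughly like this (a hypothetical Python sketch, not actual Veeam internals; all function and parameter names here are invented for the example):

```python
from collections import deque

def run_tape_job(tasks, find_candidates, is_locked, write_to_tape, max_passes=10):
    """Process tape backup tasks with the requeue behavior outlined above:
    on each pass, a task re-checks its candidate backup files; tasks whose
    files are missing or locked are requeued, and the job keeps looping
    until every task succeeds or the pass limit is reached."""
    pending = deque(tasks)
    for _ in range(max_passes):
        if not pending:
            return True                          # all tasks succeeded
        retry = deque()
        while pending:
            task = pending.popleft()
            candidates = find_candidates(task)   # re-check when the task starts
            if not candidates:
                retry.append(task)               # "No backup files found"
            elif any(is_locked(f) for f in candidates):
                retry.append(task)               # files locked, wait and retry
            else:
                write_to_tape(task, candidates)  # a hard failure would abort the job
        pending = retry                          # retry leftover tasks next pass
    return not pending
```

The key point for this thread is the last step: a task that finds no files simply lands back in the retry queue, so how often and in what order the rescan happens determines how long drives sit idle.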
So based on that, can you explain a bit more where the challenge arises?
>(source job creates synthetics so the tape job can then just take them).
Tangential, but is there a reason you prefer to grab full backups from the source job instead of letting the tape job use Virtual Fulls? Virtual Fulls are typically still the fastest way to get a full backup onto tape, with few exceptions (usually source repository performance), but maybe you fall into one of those exceptions.
David Domask | Product Management: Principal Analyst
-
mkretzer
- Veeam Legend
- Posts: 1323
- Liked: 475 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
Hello,
> Or the issue is that there are backup files that should be available for the Tape GFS job to grab but it doesn't grab them?
Yes, exactly. We have 1-2 drives idle all the time when there are only 1-2 VMs in one source backup job. It seems to rescan sequentially.
There are no blockers; it is able to take the files at any time. It just does not do so and sits there with "No backup files found" while 1-2 drives are idle and one drive alone writes a very big VM backup to tape.
If we schedule the job to run *after* all backups are finished, it seems to work a lot better (will re-verify that next weekend). But then we lose several hours during which the job could already be writing.
> Tangential, but is there a reason you prefer to grab full backups from the source job instead of letting the tape job use Virtual Fulls?
Taking VBKs is normally *slightly* faster, but not by much anymore - also we see no downside, as the synthetics run on the tape backup start day anyway.
-
david.domask
- Veeam Software
- Posts: 3197
- Liked: 742 times
- Joined: Jun 28, 2016 12:12 pm
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
Got it, thanks for the clarifications. I would ask if you could open a Support Case then and note which tasks in the job should have had files to grab that weren't detected -- it's possible that the Tape GFS job had not yet requeued tasks, but a review of the debug logs will be able to explain it.
When exporting logs, use the 1st radio button to export from jobs, and ctrl + click to select the Tape GFS job and the respective source jobs that had backup files which weren't grabbed by the Tape GFS job. It will help Support narrow down the issue more quickly.
David Domask | Product Management: Principal Analyst
-
mkretzer
- Veeam Legend
- Posts: 1323
- Liked: 475 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
So it still makes sense for 12.3?
-
david.domask
- Veeam Software
- Posts: 3197
- Liked: 742 times
- Joined: Jun 28, 2016 12:12 pm
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
Correct, let's take a look and make sure it's understood why there's a delay in detecting / grabbing some of the files, as based on your report it sounds like the job should be able to detect these new files.
David Domask | Product Management: Principal Analyst
-
mkretzer
- Veeam Legend
- Posts: 1323
- Liked: 475 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
Done, Case 07921867
Last edited by david.domask on Dec 15, 2025 2:32 pm, edited 1 time in total.
Reason: Replaced Support ID with Case ID
-
mkretzer
- Veeam Legend
- Posts: 1323
- Liked: 475 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
Ok, according to support this is currently working as designed and might be "fixed" later.
-
david.domask
- Veeam Software
- Posts: 3197
- Liked: 742 times
- Joined: Jun 28, 2016 12:12 pm
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
Thanks for the follow up, and yes, I was a bit surprised to hear this as well; it is related to how the tasks for added jobs with per-VM chains are processed.
Thanks for bringing this to our attention and glad we have an answer at least.
> might be "fixed" later
This is something we intend to correct; if Support wrote "might", I believe the intention was to convey that there is no current estimate on when the changes will be implemented. I will update this topic when there's more information.
David Domask | Product Management: Principal Analyst
-
mkretzer
- Veeam Legend
- Posts: 1323
- Liked: 475 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: 12.3 issue with rescanning for new backups in GFS jobs
Nice - for now we sadly have no real workaround, because we still want our source jobs to share media AND be exported to the export slots in the end, which will not work (at least not out of the box).