Hello there,
We came across an issue where the software stops the backup even though there is enough space left on the tape set.
The actual case: there is one last job (65% done) which is about 550 GB in size. The summary tells me "Timed out waiting for tape in HP MSL G3 Series".
We have two libraries with a total of 7 tapes per currently running media set. In this particular case, library #1 holds one tape (Ultrium 5) with 952 GB of remaining capacity.
While this last job was running, some other jobs seemed to be running as well, "blocking" the libraries by writing their data to the media set.
The thing is: apparently a job waits only ONCE for a positive response from the library. If nothing comes back within a time window that cannot be configured, the job stops working and asks for a tape from a completely different media set, one that is not actually available inside the libraries. Instead of polling the libraries every now and then, nothing further happens. The job always runs into the 72-hour global timeout and reports an error.
It would be fabulous if there were a way to tell those stalled jobs to ask the libraries again (because by now the tape with the free space is ready to use, but the stubborn job refuses to ask) and finish their task.
I have already had several support tickets, and none of them came to a helpful conclusion.
Please give me a hint as to what would help at this stage. Tomorrow the job will fail, so I have some time to test a solution.
Please help
Patrick
==============================
Corresponding support cases:
05260708 // 05265353 // 05306270 // 05383765 // 05383807
Re: Tapes with free space are not used.
Hello,
and welcome to the forums.
I checked the cases and some of them seem unrelated to the question here (e.g. 05383807 is about Exchange as far as I can see). I guess it's about case 05383765?
I would like to ask the question the other way around: what leads to the situation with the locked drives? My guess is that many tape jobs were created for a small number of machines / drives, so redesigning the tape jobs would be the best solution. In general, if a tape job runs into a 72h timeout, there also seems to be a performance issue, and that is something I would also look at.
Best regards,
Hannes
Re: Tapes with free space are not used.
Good Morning Hannes,
Well, even though that case involved Exchange data, it was about the tape backup. That particular case was about the fact that Veeam carries backups that did not run completely over to the next backup run, without telling anyone that double the size is now needed for the backup because of the previously added payload.
Only disabling and re-enabling the corresponding job gets rid of this additional payload.
But back to the topic.
My "normal" steps every single week are as follows:
1. Our customer replaces the last media set with the next media set of 7 tapes.
2. After an inventory run, I mark the newly inserted tapes as free.
3. I move the tapes now marked as free to the correct media set inside the GFS pool.
4. I disable all 5 tape jobs, wait 2-5 minutes and enable them again, to drop any old payload that was added to them.
Then I wait for the weekend and hope for the best, that the backup runs without errors. That is definitely not always the case. (A rough PowerShell sketch of these weekly steps is at the end of this post.)
[EDIT]
Additional information: there are 2 tape libraries and each one has one LTO5 drive.
[/EDIT]
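For reference, here is a minimal sketch of how I imagine these weekly steps could be scripted. This assumes the Veeam B&R PowerShell module is available and that cmdlets such as Get-VBRTapeLibrary, Start-VBRTapeInventory, Get-VBRTapeMediaPool, Get-VBRTapeMedium, Move-VBRTapeMedium, Get-VBRTapeJob and Disable-VBRJob/Enable-VBRJob behave this way in your version; the pool name "Weekly GFS Pool" is just a placeholder:
[code]
# Rough sketch of the weekly rotation, assuming the Veeam B&R PowerShell module
# and that these cmdlet names/parameters match your version.
# Pool and server names are placeholders - verify before use.
Connect-VBRServer -Server "localhost"

# Step 2: run an inventory on both libraries so the swapped tapes are recognized
foreach ($lib in Get-VBRTapeLibrary) {
    Start-VBRTapeInventory -Library $lib
}

# Step 3: move the tapes from the Free pool into the GFS media pool
# (marking the swapped tapes as free is still done in the console here)
$gfsPool  = Get-VBRTapeMediaPool -Name "Weekly GFS Pool"   # placeholder name
$freePool = Get-VBRTapeMediaPool -Name "Free"
$newTapes = Get-VBRTapeMedium -MediaPool $freePool
if ($newTapes) {
    Move-VBRTapeMedium -Medium $newTapes -MediaPool $gfsPool
}

# Step 4: disable all tape jobs, wait a few minutes, then enable them again
# to drop any payload carried over from an incomplete previous run
# (assuming Disable-VBRJob/Enable-VBRJob accept tape job objects; otherwise
#  keep toggling the jobs in the console)
$tapeJobs = Get-VBRTapeJob
$tapeJobs | ForEach-Object { Disable-VBRJob -Job $_ }
Start-Sleep -Seconds 300
$tapeJobs | ForEach-Object { Enable-VBRJob -Job $_ }

Disconnect-VBRServer
[/code]
I have not run this against production; it is only meant to show where the manual steps could be automated.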