I have 10 VMware backup jobs writing to HPE Catalyst shares on a StoreOnce as the source, each with a varying VM count and size, and each has its own Backup to Tape job set to continuous so it runs as backup files appear. All of them back up to a single tape server/media pool with 5 x LTO-7 drives.
Since I'm backing up to a Catalyst share, the backup to tape jobs do not begin until the source job is complete and idle.
For whatever reason, a specific tape drive will intermittently report that the drive is not ready, but instead of moving to the next drive, the job continues down the list of servers and fails each one. The backup to tape job fails, then retries a few minutes later with the same result. This morning I woke up to hundreds of failure emails, each about 3 minutes apart, all with the same error for the same tape job.

I've been dealing with this problem for some time now. It's random and intermittent, but common enough that it has lowered my quality of life, since I always need to be within reach of a computer to log in and babysit these jobs. I need to be able to log in and perform my workaround, because I have found one... a very simple one.
I just disable the affected tape drive in Veeam. Veeam then moves on to the next tape drive within seconds, even in an active job, and begins moving data to tape.

Once the drive is disabled and the job is moving data again, I can re-enable the disabled tape drive and the job will use it with no issue.
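For anyone who wants to do the same toggle without clicking through the console, here is a minimal sketch of the manual workaround. It assumes the Veeam PowerShell snap-in/module and its tape cmdlets (Get-VBRTapeDrive, Disable-VBRTapeDrive, Enable-VBRTapeDrive) are available on the machine it runs from, and the drive name is just a placeholder for whichever drive reports "not ready":

```python
import subprocess

# Placeholder: the display name of the drive that is reporting "not ready".
DRIVE_NAME = "Drive 3"

def run_ps(command: str) -> str:
    """Run a PowerShell command with the Veeam snap-in/module loaded and return its stdout."""
    # Pre-v11 installs need the snap-in; v11+ auto-loads the Veeam.Backup.PowerShell module.
    preamble = "Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue; "
    completed = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", preamble + command],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout.strip()

# Disable the affected drive; the active backup-to-tape job should fail over
# to one of the other enabled drives within seconds.
run_ps(
    f"$d = Get-VBRTapeDrive | Where-Object {{ $_.Name -eq '{DRIVE_NAME}' }}; "
    "Disable-VBRTapeDrive -Drive $d"
)

# Wait until a human confirms the job is moving data again, then re-enable the drive.
input("Press Enter once the tape job is writing data again to re-enable the drive... ")
run_ps(
    f"$d = Get-VBRTapeDrive | Where-Object {{ $_.Name -eq '{DRIVE_NAME}' }}; "
    "Enable-VBRTapeDrive -Drive $d"
)
```

This just mirrors the console clicks: disable, confirm data is flowing, re-enable.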
This, of course, may happen again at random when the source jobs run on their next interval and complete, and the continuous backup-to-tape jobs start and begin enumerating the detected backup files.
I will open a new case. In the one I opened in the past, I believe support was trying to get me to look at the Windows device settings on my tape server (which are all fine: enabled and active), and I admit I probably closed it in frustration, as I had already troubleshot the hardware and worked with the vendor on the tape issue. Narrowing an issue down between hardware and software is tough...
The tape drive(s) are completely idle, no cleanings are happening, and no other apps are trying to use the drive. My only thought is that another tape job is planning to use the drive (even though it's not locked or being actively used?).
Any potential problems in my environment aside (if anyone can find one), this should be considered a feature request as well. I do not see why Veeam cannot simply try another drive (when this happens there are always drives available and not in use). The media pool settings have some failover options to try another library if a tape library goes offline, so if the tape job has an issue with a drive, why can it not simply try another drive?
I am considering writing some scripts to automate this process of disabling an "in use" drive, since disabling it for a few minutes will allow the job to continue; a sketch of what I have in mind is below.
Once the job progresses, I enable the drive and it is free for the existing job (or any job) to use with no issue. That is, until the next source backup runs and completes, at which point the tape jobs may randomly hit this problem again.
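Roughly, the unattended version I'm thinking of looks like this: the same cmdlets, but with a fixed cool-off instead of me watching the job, so it could run from a scheduled task or be triggered off the failure alert. The drive name is passed in as an argument, and the 5-minute cool-off is just my guess at "long enough for the retried job to grab another drive", not anything Veeam prescribes:

```python
import subprocess
import sys
import time

COOL_OFF_MINUTES = 5  # leave the drive disabled long enough for the retried job to pick another drive

def run_ps(command: str) -> str:
    """Run a PowerShell command with the Veeam snap-in/module loaded and return its stdout."""
    # Pre-v11 installs need the snap-in; v11+ auto-loads the Veeam.Backup.PowerShell module.
    preamble = "Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue; "
    completed = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", preamble + command],
        capture_output=True, text=True, check=True,
    )
    return completed.stdout.strip()

def set_drive_enabled(drive_name: str, enabled: bool) -> None:
    """Enable or disable a tape drive, looked up by its display name."""
    cmdlet = "Enable-VBRTapeDrive" if enabled else "Disable-VBRTapeDrive"
    run_ps(
        f"$d = Get-VBRTapeDrive | Where-Object {{ $_.Name -eq '{drive_name}' }}; "
        f"{cmdlet} -Drive $d"
    )

def cycle_drive(drive_name: str) -> None:
    set_drive_enabled(drive_name, False)   # job fails over to another free drive within seconds
    time.sleep(COOL_OFF_MINUTES * 60)      # let the retried job get past enumeration and start writing
    set_drive_enabled(drive_name, True)    # drive is free again for this or any other job

if __name__ == "__main__":
    # Drive name is passed in, e.g. "Drive 3", from a scheduled task or an alert-triggered action.
    cycle_drive(sys.argv[1])
```

It's a band-aid, not a fix, but it would at least stop the 3 a.m. babysitting until the root cause is found.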
Case #03401343 was created.