I have almost exactly the same problem, but with a GFS tape job. In addition, the tape job that gets stopped does not clean up after itself.
I have a normal backup job that backs up several file servers, and a linked GFS tape job that starts at midnight every week (as far as I can tell, the start hour cannot be changed).
As I only have LTO5 drives, the backup to tape takes more than 24 hours, so the tape job always gets stopped by the backup job. As far as I can tell, it is not possible to configure wait times for GFS tape jobs.
After the tape job has been running from midnight until 7 PM, it gets stopped when the backup-to-disk job starts. By the time it fails, it has used up 3-4 tapes. The Veeam GUI shows no backups on these tapes, yet all their space has been consumed. If you catalog the tapes, you can see there is a partial backup on them.
I have opened a support case for this (01756660) and uploaded logs and screenshots to an FTP server. The response from support was:
Unfortunately, this is behavior by design: the backup job has higher priority than the backup to tape, and if the backup job runs while the backup to tape is in progress, the backup to tape will fail.
This is not considered a bug, but you can request a feature.
Feature request 1:
Veeam Backup and Replication should clean up after itself when a tape job is stopped mid-run and return the tapes to the free pool. Partial full backups are useless.
Feature request 2:
I want a way to take GFS tape backups of large backup jobs from the GUI without resorting to scripting. As it is now, my GFS tape job fails once a week, filling up 3-4 tapes each time, until my library runs out of tapes.
Feature request 3:
It should be possible to select the time of day when a GFS tape job runs. Currently they all start at 00:00.
Feature request 4:
It should be possible to configure wait times for GFS tape jobs.