OK, it has taken me a while to get around to writing this, but I now have the information I need to explain what is going on.
First, the good news: Update 1 fixed the issue where the job failed when a backup copy job merge began. Excellent.
But I also ran into something that caused me some chagrin:
When a backup copy job is the source for a GFS tape job, the tape job waits until the end of the current copy interval to process the file. In our case, since our copy jobs run on a 7-day interval, that means the tape job can take up to a week to run. Not fun for a job that is intended to make an archival copy of the latest state of everything at a point in time. I figured I could just run the tape job on the first Sunday of the month; it would take a couple of days but would finish with copies of the state nearest the end of the month, which, while not ideal, would be acceptable.
But this morning, looking at the status of the monthly GFS job, I discovered another thing that confuses me:
Even though it holds off on writing the data to tape until the current copy interval is completed, it doesn't use the restore point it is supposedly "waiting for". As soon as the copy interval starts, it takes the restore point that was present when the GFS tape job began, copies it to tape, and shows that source as completed successfully. In my case, that is a restore point from a week ago, which makes it even harder to get an "end of month" tape for all of our critical VMs.
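To make the timing problem concrete, here is a minimal sketch (my own illustration, not Veeam internals; the function name and dates are made up) modeling the behavior described above: the tape job waits out the current 7-day copy interval, yet writes the restore point that already existed when it started.

```python
from datetime import date, timedelta

COPY_INTERVAL_DAYS = 7  # our backup copy jobs run on a 7-day interval

def gfs_tape_run(tape_job_start: date, interval_start: date):
    """Model the observed behavior: return (date the tape write finishes,
    date of the restore point that actually lands on tape)."""
    # The tape job holds off until the current copy interval completes...
    interval_end = interval_start + timedelta(days=COPY_INTERVAL_DAYS)
    write_date = max(tape_job_start, interval_end)
    # ...but it copies the restore point that was already on disk when the
    # job began, i.e. the one from the start of the interval.
    restore_point = interval_start
    return write_date, restore_point

# Hypothetical example: interval and monthly GFS job both start Sunday the 1st.
write_date, restore_point = gfs_tape_run(date(2015, 3, 1), date(2015, 3, 1))
print(write_date, restore_point)  # the tape finishes a week after the data it holds
```

So by the time the tape is written, the data on it is already a full interval (a week) old, which is exactly the gap I am seeing.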
Why, if the tape job is just going to use the previous restore point already on disk, does it need to wait until the current copy interval is finished?
When I heard about GFS tape jobs, I thought all of my tape scheduling headaches would go away, but it seems they have actually made things worse.
Veeam Certified Architect