I have two sites, each with a StoreOnce 4500 and a Disk Repository Server (Landing Zone, LZ), and a single Tape Library. I'm trying to use the HPE Catalyst Copy function to limit the bandwidth used between my two sites and to reduce my source backups to one per source.
I am looking to create duplicates of both sites' data across my StoreOnces, as well as on my Site A Landing Zone, to allow for faster Backup to Tape. Ideally I'd have only one backup per source and then let immediate Backup Copy jobs push the source data from the LZs to the StoreOnces for deduplication; the HPE Catalyst job would then push that data to the other StoreOnce, and a copy from LZ2 to LZ1 would let the un-deduped data be picked up by the tape job.
Since the HPE Catalyst Copy job only works from a Backup Job, I attempted that, but pointing the source and target at a single Catalyst Store on each StoreOnce caused the backups to loop: I'd end up with over 1,000 Backup Copy jobs as it tried to push the backups back and forth between the two StoreOnces. Not ideal.
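To illustrate why this loops, here is a toy model (plain Python, not HPE or Veeam code) of two stores where each store's copy job pushes every item it sees to its peer. Unless a copied item is tagged as a replica and excluded from further copying, a single backup ping-pongs between the stores indefinitely:

```python
from collections import deque

def simulate(max_copies, tag_replicas):
    """Count copy operations triggered by one backup landing on store A
    when each store's copy job pushes every new item to its peer."""
    peer = {"A": "B", "B": "A"}
    # queue of (store, item, is_replica) events to process
    queue = deque([("A", "backup1", False)])
    copies = 0
    while queue and copies < max_copies:
        store, item, is_replica = queue.popleft()
        if tag_replicas and is_replica:
            continue  # a replica is terminal; don't re-replicate it
        queue.append((peer[store], item, True))  # copy lands on the peer
        copies += 1
    return copies

print(simulate(1000, tag_replicas=False))  # 1000: hits the cap, loops forever
print(simulate(1000, tag_replicas=True))   # 1: one copy, then it stops
```

This is essentially the behavior the feature request below asks for: the copy job would need to distinguish original backups from replicated ones within the same store.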
I opened a ticket with support and worked through several strategies with them.
The strategy I'm currently testing:
Pros: It gets me replication of all my data across both StoreOnces, and it should reduce overall inter-site bandwidth since the StoreOnce-to-StoreOnce traffic leverages HPE Catalyst Copy.
Cons: I have to create two source backup jobs, which taxes my schedule/resource availability, and my data is spread across multiple Catalyst Stores, so I don't get as good a dedupe rate as I would if it were all shared in a single Catalyst Store on each StoreOnce.
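The dedupe penalty is easy to see with a toy chunk-level model (illustrative only, not StoreOnce internals): if two sources share most of their chunks, a single store collapses the shared chunks once, while split stores each keep their own copy. The 80% overlap figure below is an assumption for the example:

```python
def stored_chunks(stores):
    """Total unique chunks kept when dedupe happens per store.
    `stores` is a list of stores; each store is a list of chunk sets."""
    return sum(len(set().union(*store)) for store in stores)

# Two sources, 100 chunks each, 80 of them identical across sources.
shared = {f"s{i}" for i in range(80)}
src1 = shared | {f"a{i}" for i in range(20)}
src2 = shared | {f"b{i}" for i in range(20)}

one_store  = stored_chunks([[src1, src2]])   # 120 chunks stored
two_stores = stored_chunks([[src1], [src2]]) # 200 chunks stored
print(one_store, two_stores)
```

Under these assumptions, splitting the data across two Catalyst Stores costs roughly two-thirds more capacity than sharing one store per StoreOnce.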

My Feature Requests:
Allow the HPE Catalyst Copy job to recognize Backup Copy jobs that land in a StoreOnce Catalyst Store, and allow HPE Catalyst Copy jobs to share a single Catalyst Store (stopping the looping), so the deduplication ratio stays higher than it would with similar data split across multiple stores.
How I'd like it to work:

This is how I was originally getting things done, but I kept running into scheduling/resource conflicts: whenever Full jobs ran, a job was sure to fail while waiting for resources.

Another strategy: use a single store and treat one StoreOnce as the original and the other as the replicated copy. This prevents looping but requires two backup jobs for every source. Also, in the event of a connection drop between sites, it reduces my backup redundancy, because the StoreOnce copy job will fail and all I'll have is the local LZ backup job.

If anyone has a different strategy to suggest, it'd be most appreciated.