
I know that Veeam v10 can do 2 things:
1. Move inactive chains to object storage
2. Copy active chains to object storage
My initial thought was that we can't "move" active chains because of the API overhead of updating the full backup file in S3 during something like a merge operation, but doesn't Veeam just update metadata pointers? Also, if we can "copy" a chain up to object storage, then we'd be battling the same issue, because a copied chain with an active VBK is always changing, so that wouldn't make sense either.
So my question is: why can't we "move" an active chain? There seems to be little functional difference between updating a "copied" chain and updating a moved one that simply doesn't exist on-prem. I guess that instead of merging VBK changes locally and then copying, Veeam would have to make the changes directly in object storage, which doesn't seem like a big issue, but what am I missing? If it were easy, surely Veeam would have allowed it, right?
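
To make my mental model concrete, here's a rough Python sketch of the difference I'm picturing between the two merge strategies. This is purely conceptual - the block/index layout is invented and has nothing to do with Veeam's actual VBK format:

```python
# Conceptual sketch only -- NOT Veeam's real format; names are made up.
# Models a full backup as blocks keyed by offset, and compares how much data
# a merge pushes to object storage under two strategies.

BLOCK = b"x" * 1024            # pretend each block is 1 KiB of data

class FakeS3:
    """Stand-in for an object store that just counts uploaded bytes."""
    def __init__(self):
        self.objects, self.bytes_put = {}, 0
    def put(self, key, data):
        self.objects[key] = data
        self.bytes_put += len(data)

def merge_by_rewriting(s3, full, increment):
    """The full lives in S3 as one big object, so merging the oldest
    increment means re-uploading the whole thing."""
    merged = {**full, **increment}
    s3.put("full.vbk", b"".join(merged[k] for k in sorted(merged)))
    return merged

def merge_by_repointing(s3, full_index, increment):
    """Blocks are immutable objects in S3; the 'full' is just an index of
    block keys, so a merge uploads only the changed blocks plus a tiny index."""
    for offset, data in increment.items():
        s3.put(f"blocks/{offset}-v2", data)                 # new blocks only
    merged = {**full_index, **{o: f"blocks/{o}-v2" for o in increment}}
    s3.put("full.index", ",".join(f"{o}:{k}" for o, k in merged.items()).encode())
    return merged

if __name__ == "__main__":
    full = {o: BLOCK for o in range(10_000)}                # ~10 MiB "full"
    inc = {o: BLOCK for o in (7, 42, 99)}                   # 3 changed blocks
    a, b = FakeS3(), FakeS3()
    merge_by_rewriting(a, full, inc)
    merge_by_repointing(b, {o: f"blocks/{o}" for o in full}, inc)
    print(f"rewrite: {a.bytes_put:>10,d} bytes uploaded")
    print(f"repoint: {b.bytes_put:>10,d} bytes uploaded")
```

If Veeam's capacity tier already works like the second function (immutable blocks plus index updates), then merging an active chain in S3 doesn't seem obviously expensive, which is exactly what I'm trying to understand.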
Here's the use case:
The customer has a daily backup job, plus another backup job that runs once a week and acts as an archival weekly snapshot of the servers. That gives them one full backup and 51 increments per year. What they want is to have that whole chain in object storage, so that every week when a new increment is taken, it gets sent up to S3 right after the job and lives with the rest of the chain, using very little on-prem storage. The customer has explained that they don't have enough space on their hardware for the normal backup chain, the archival chain, and the second archival full backup that would have to be taken to seal the first chain so it could be deleted from on-prem.
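
For scale, here's the back-of-the-envelope math I'm working from (the 10 TB full / 1 TB weekly increment figures are placeholders, same numbers as in my dedup question further down):

```python
# Rough storage math for the archival job, using placeholder sizes.
full_tb = 10                 # one archival full (hypothetical)
inc_tb = 1                   # average weekly increment (hypothetical)
increments_per_year = 51

chain_on_prem = full_tb + increments_per_year * inc_tb    # 61 TB if kept locally
sealing_full = full_tb                                    # extra full needed to seal the old chain
print(f"Archival chain kept on-prem : {chain_on_prem} TB")
print(f"Extra full just to seal it  : {sealing_full} TB")
print(f"If offloaded weekly to S3   : roughly the newest increment, ~{inc_tb} TB on-prem")
```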
Here's what I feel like you guys are going to tell me to do now:
Use the new GFS retention scheme for normal backups, put that on a SOBR linked to object storage, and then tell it to move sealed GFS chains up to S3 after creation, so that I only keep 1-2 full backups on disk instead of 2-3. Is that accurate, or is there a better way?
Is there anything I'm missing? Is there a better way to configure this?
Bonus questions:
Is there a hack or workaround to force the active chain to "move" to S3?
Is this a potential change coming in vNext, or is this the functionality that I need to be happy with long-term?

Can I make the current chain inactive just by manually right-clicking the job and running an active full, or does it need to be some sort of scheduled operation? I'm not sure why it would matter, but I know there are a few things Veeam treats differently between scheduled and non-scheduled jobs.
Veeam only moves unique blocks up to S3, right? So if we have a 10TB backup in S3, and our next sealed chain is 10TB with only 1TB of unique data, only 1TB will be uploaded - is that correct?
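
Here's how I picture that dedup-on-offload working, again as a hand-rolled sketch with made-up names rather than Veeam's actual block index:

```python
# Hand-rolled sketch of block-level dedup on offload -- not Veeam's actual logic.
import hashlib

def offload_unique_blocks(chain_blocks, already_in_s3, upload):
    """Upload only blocks whose hash isn't already present in object storage.
    `already_in_s3` is the set of block hashes known to exist in the bucket;
    `upload(h, data)` is whatever actually pushes a block."""
    uploaded = 0
    for data in chain_blocks:
        h = hashlib.sha256(data).hexdigest()
        if h not in already_in_s3:
            upload(h, data)
            already_in_s3.add(h)
            uploaded += len(data)
    return uploaded

# Example: a 10-block "sealed chain" where only 1 block is new data.
existing = {hashlib.sha256(b"block-%d" % i).hexdigest() for i in range(9)}
chain = [b"block-%d" % i for i in range(9)] + [b"brand-new-block"]
sent = offload_unique_blocks(chain, existing, upload=lambda h, d: None)
print(f"{sent} bytes actually uploaded")   # only the new block's bytes
```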
If we decided we were done with the old way of doing things with our separate archival backup job and moved to GFS, what would be the proper, Veeam-supported way to seal off the old chain and have it deleted from on-prem? Should I take all the VMs out of the job except one tiny one and run an active full of that tiny VM so Veeam offloads the previous chain to S3, then manually delete the most recent full restore point of that leftover VM and disable/delete the job?
Since the chain has already been "copied" by the time the "move" retention kicks in, Veeam is essentially just going to update the local metadata and delete the data blocks in all of the on-prem backup files, which should be a quick operation, right?
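
In my head, the local side of that "move" looks roughly like the sketch below (invented structure, just to check my understanding that it's metadata updates plus deletes rather than a second upload):

```python
# Sketch of "dehydrating" local backup files after the data is already in S3.
# Invented structure -- the point is just that the move is metadata + deletes.

def dehydrate_backup_file(backup_file, s3_block_keys):
    """Replace local data blocks with pointers to their copies in object
    storage. Quick, because the blocks were offloaded during the earlier
    'copy' step."""
    for block in backup_file["blocks"]:
        block["s3_key"] = s3_block_keys[block["id"]]   # point metadata at S3 copy
        block["data"] = None                           # free the local bytes
    backup_file["resident"] = False                    # mark file as an offloaded shell
    return backup_file

vbk = {"blocks": [{"id": i, "data": b"x" * 1024} for i in range(4)]}
keys = {i: f"s3://bucket/blocks/{i}" for i in range(4)}
print(dehydrate_backup_file(vbk, keys)["resident"])    # False -- shell left on disk
```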
Whoever decides to tackle this post and answer these questions will be awarded a gold star and my everlasting gratitude.
PS - this is where I got my info; let me know if I missed something:
https://helpcenter.veeam.com/docs/backu ... ml?ver=100
https://helpcenter.veeam.com/docs/backu ... ml?ver=100