Currently running Veeam B&R v11a on a standalone server with storage attached over iSCSI from a QNAP over multiple 1 Gbps NICs. Utilized storage is ~45-50 TB out of a possible 100 TB; the biggest chunk of it, ~30 TB, belongs to a local file server.
The main goal was to implement another repository to hold a copy of the existing backups, but after a talk and a look at the prices I'm considering 2 scenarios:
1) One S3 bucket with a copy job, simply holding the backups for twice as much time
2) One S3 Glacier bucket with a copy job, holding only a few monthly backups due to the required 180 days of retention (this requires 1 S3 bucket, 1 S3 Glacier bucket, and an EC2 instance to push it)
After reading a few posts I've come to the conclusion that API usage is calculated in a straightforward way: 1 API PUT/GET = 1 MB transferred, so that part is understandable.
Now I'm interested in how the 'forever incremental' chain used by those S3 offload and copy jobs is calculated.
Example: 1 file server with 14 retention points, doing a full backup every month and incrementals the rest of the time. (Yes, this config means more retention points get created because the full only runs every 30 days; I'm not trying to fix that now, but I will have to.)
1)
On day 1 the backup is created, which takes ~90 h. After that it would be copied to S3, which would take another ~8 days (based on a few upload tests, manually moving ~50 GB VBR files to S3).
The full backup would be 19 TB and would grow by about 100 GB with each following day (the daily diffs are 10-100 GB, with the top peaks seen at 300 or 500 GB).
19 TB = ~19 million API calls, plus another ~100,000 API calls daily to transfer the incremental backup, which 'if the weather is good' should complete within the 24 h daily cycle.
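To sanity-check those numbers, here's a rough estimate sketch under my own assumptions (1 PUT request per 1 MB uploaded and a placeholder PUT price of $0.005 per 1,000 requests; please correct me if the real block size or pricing differs):

```python
# Rough request-count / cost estimate for the initial full + daily incrementals.
# Assumptions (mine, not from Veeam/AWS docs): 1 PUT per 1 MB uploaded,
# placeholder PUT price in USD.

FULL_BACKUP_TB = 19            # initial full backup size
DAILY_INCREMENT_GB = 100       # typical daily incremental size
PUT_PRICE_PER_1000 = 0.005     # assumed S3 Standard PUT price per 1,000 requests

MB_PER_TB = 1024 * 1024
MB_PER_GB = 1024

full_puts = FULL_BACKUP_TB * MB_PER_TB        # ~19.9 million PUTs
daily_puts = DAILY_INCREMENT_GB * MB_PER_GB   # ~102,400 PUTs per day

full_cost = full_puts / 1000 * PUT_PRICE_PER_1000
daily_cost = daily_puts / 1000 * PUT_PRICE_PER_1000

print(f"Full backup:   {full_puts:,.0f} PUTs -> ~${full_cost:,.2f}")
print(f"Per day:       {daily_puts:,.0f} PUTs -> ~${daily_cost:,.2f}")
print(f"First 30 days: ~${full_cost + 30 * daily_cost:,.2f} in PUT requests")
```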
Now fast forward 30 days: another full backup is done. Veeam performs another full backup to the on-premises repository (performance tier) and copies it to the S3 capacity tier. Will it upload yet another full copy, or will it calculate the changed blocks against what's already in the capacity tier? How much space would it take in S3 after that second full in this case?
2)
In addition, the backups are held for longer, but does transferring from S3 to S3 Glacier with, for example, a 0-day storage policy (as configured in Veeam VBR), so that backups are moved from regular S3 to Glacier, generate yet another round of API calls to move them within the cloud? So if 19 TB is being pushed, the calculated storage cost is the same, but the API cost is 2x, and the same applies to the incrementals?
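Here's how I'm modelling that question, again under my own assumptions (one extra request per uploaded object at the same ~1 MB granularity, placeholder prices to be swapped for the current AWS price list); someone who knows how the archive tier actually repacks objects, please correct this:

```python
# Sketch of the "S3 -> Glacier with a 0-day policy" scenario.
# Assumption (mine): the move generates one extra request per uploaded object,
# at the same ~1 MB granularity as the original PUTs. Prices are placeholders.

FULL_BACKUP_TB = 19
MB_PER_TB = 1024 * 1024

PUT_PRICE_PER_1000 = 0.005         # assumed S3 Standard PUT price (USD)
TRANSITION_PRICE_PER_1000 = 0.05   # assumed Glacier transition request price (USD)

objects = FULL_BACKUP_TB * MB_PER_TB            # ~19.9M objects at ~1 MB each

put_cost = objects / 1000 * PUT_PRICE_PER_1000
transition_cost = objects / 1000 * TRANSITION_PRICE_PER_1000

# The request *count* doubles (2x), but the cost ratio depends on how the
# transition requests are priced relative to plain PUTs.
print(f"PUT requests:        {objects:,.0f} -> ~${put_cost:,.2f}")
print(f"Transition requests: {objects:,.0f} -> ~${transition_cost:,.2f}")
print(f"Total request cost:  ~${put_cost + transition_cost:,.2f}")
```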
As a bonus question: is it possible to select per machine within a SOBR which ones should use the capacity/archive tier, so that while we test it we don't push all 50 TB? I've seen a post suggesting I could create a 2nd SOBR and just not include this S3 storage in it, but when trying to configure one more I can't use the same performance tier extent twice?
Sorry for the long post, but I'm trying to give all the info needed.

Regards,
Hubert