Discussions related to using object storage as a backup target.
hubert.mroz
Novice
Posts: 5
Liked: never
Joined: Apr 27, 2020 2:32 pm
Full Name: Hubert Mroz
Contact:

Copy to AWS S3

Post by hubert.mroz »

Hello,

Currently running Veeam v11a on a standalone server with storage attached over iSCSI from a QNAP across multiple 1 Gbps NICs, with ~45-50 TB used out of a possible 100 TB; the biggest chunk of it, ~30 TB, is held by a local file server.

The main goal was to implement another repository to keep a copy of the existing backups, but after some talks and a look at the prices I'm considering 2 scenarios:
1) one S3 bucket with a copy job, strictly holding the backups for twice as long;
2) one S3 Glacier bucket with a copy job holding on to a few monthly backups, due to the required 180 days of retention (this requires one S3 bucket, one S3 Glacier bucket and an EC2 instance to push the data).

After reading a few posts I've come to the conclusion that API usage is calculated in a straightforward way, with 1 PUT/GET API call = 1 MB transferred, so that part is understandable.
Now I'm interested in how the 'forever incremental' mode used for these S3 offload and copy jobs is calculated.

Example: one file server with 14 retention points, doing a full backup every month and incrementals for the rest. (Yes, this config means that more retention points are actually kept, because the full runs only every 30 days; I'm not trying to fix that now, but will have to.)
1)
On day 1 the backup is created, which takes ~90 h. After that it would be copied to S3, which would take another ~8 days (based on a few upload tests, manually moving ~50 GB backup files to S3).
The full backup would take 19 TB and grow by about 100 GB with each following day (the daily diffs are 10-100 GB, with the highest peaks seen being 300 or 500 GB).
19 TB = 19 million API calls, plus another 100,000 API calls each day to transfer the incremental backup, which 'if the weather is good' should complete within the 24-hour daily cycle.
Now fast-forward 30 days into the future, and another full backup is done. Veeam performs another full backup to the on-premises performance tier and copies it to the S3 capacity tier. Will it upload yet another full copy, or will it calculate the changed blocks against what is currently in the capacity tier? How much space would it take in S3 after another full in this case?
2)
As an addition the jobs are held for longer, but does transferring from S3 to S3 Glacier with, for example, a 0-day storage policy (as configured in Veeam VBR), so that backups are moved from regular S3 to Glacier, generate yet more API calls to move them within the cloud? So if 19 TB is being pushed, the calculated storage cost is the same, but the API cost is 2x, and the same applies to the incrementals? (A rough request-count estimate is sketched below.)
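
For reference, here is the back-of-the-envelope sketch I'm working from. It only uses the assumptions above (1 MB per PUT/GET object, which may not match what Veeam actually writes) and placeholder per-request prices that would need to be checked against current AWS pricing:

```python
# Rough request-count / cost sketch for the two scenarios above.
# Assumptions (mine, not from Veeam/AWS docs): one object per 1 MB of backup
# data, and placeholder per-request prices -- check current AWS pricing.

MB_PER_TB = 1024 * 1024
MB_PER_GB = 1024

full_tb = 19            # initial full backup, TB
daily_gb = 100          # average daily incremental, GB

put_per_1000 = 0.005    # placeholder S3 PUT price, USD per 1,000 requests
get_per_1000 = 0.0004   # placeholder S3 GET price, USD per 1,000 requests

# Scenario 1: copy job into a plain S3 bucket
full_puts = full_tb * MB_PER_TB        # ~19.9 million PUT requests
daily_puts = daily_gb * MB_PER_GB      # ~102,400 PUT requests per day
print(f"initial full : {full_puts:,} PUTs, ~${full_puts / 1000 * put_per_1000:,.0f}")
print(f"daily diff   : {daily_puts:,} PUTs, ~${daily_puts / 1000 * put_per_1000:,.2f}")

# Scenario 2: the same data is later moved from S3 to Glacier. My assumption:
# the move reads back every existing object (GET) before it is written to the
# archive tier, so the request count roughly doubles.
move_gets = full_puts
print(f"glacier move : ~{move_gets:,} extra GETs, ~${move_gets / 1000 * get_per_1000:,.0f}")
```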

As a bonus question: is it possible to apply some separation within the SOBR to select which machines should use the capacity/archive tier, so that while we test it we don't push the whole 50 TB? I've seen a post suggesting I could create two separate SOBRs and simply not include this S3 storage in one of them, but when trying to configure another SOBR it seems I can't use the same performance tier extent twice?

Sorry for the long post, but I'm trying to give all the info needed :)
Regards,
Hubert
---
Regards,
Hubert Mroz
HannesK
Product Manager
Posts: 14844
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Copy to AWS S3

Post by HannesK »

Hello,
1) please check the sticky FAQ ;-)

2) yes, there are GET API costs for reading the small objects from S3 before converting them to large objects for Glacier (roughly sketched below). 2x sounds good.

bonus: not today, but in V12 you will be able to write to object storage directly. That means you could use backup copy jobs at a "per job" level.
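
To illustrate point 2 above, roughly speaking: the small capacity-tier objects are read back and repacked into much larger objects before they land in Glacier, so reading costs GET requests and writing the repacked objects costs PUT requests. A minimal sketch, with object sizes that are only my assumption here:

```python
# Sketch with assumed sizes: small capacity-tier objects are read back (GET)
# and repacked into larger archive-tier objects (PUT) when moving to Glacier.
small_object_mb = 1       # assumed size of a capacity-tier object
large_object_mb = 512     # assumed size of a repacked archive-tier object
data_tb = 19              # amount of data being archived

small_objects = data_tb * 1024 * 1024 // small_object_mb   # GET requests
large_objects = data_tb * 1024 * 1024 // large_object_mb   # PUT requests
print(f"~{small_objects:,} GETs to read, ~{large_objects:,} PUTs to write")
```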

Best regards,
Hannes
hubert.mroz
Novice
Posts: 5
Liked: never
Joined: Apr 27, 2020 2:32 pm
Full Name: Hubert Mroz
Contact:

Re: Copy to AWS S3

Post by hubert.mroz »

Hello,

Thanks for the info and sorry for the late reply.
1) The FAQ definitely resolves most of my questions, but I wanted to ask a few more things.
Since the backup is forever incremental, the S3 backups are, storage-wise, duplicated on the cloud side (so the next full backup takes the same space plus the incremental, but it gets its data from the previous backup).
Does this operation also consume any API calls (i.e. GET calls against S3 and PUT calls towards S3 Glacier)?
2) How much space does the capacity tier need if I keep a 0-day policy on it and move everything to the archive tier? Does it require storage equal to a full backup for the duration of the move? Does it free this space automatically, or do I have to reclaim it on the S3 side?
---
Regards,
Hubert Mroz
HannesK
Product Manager
Posts: 14844
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Copy to AWS S3

Post by HannesK »

Hello,
1) hmm, not sure I got it 100%. The "virtual full" does not cost API calls in S3 (a small illustration of what I mean is sketched after point 2 below). Glacier only stores GFS (weekly / monthly / yearly) backups, so that is what causes API calls (PUT & GET).

2) Only GFS restore points can be moved to Glacier. Anything else would cause more costs, so the software does not allow it. What is your retention policy?
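
To illustrate point 1: the way to picture it (a minimal sketch, not actual Veeam internals, and the block names are made up) is that a virtual full only re-references blocks that already exist as objects in the capacity tier, so just the blocks that are not there yet cause PUT requests:

```python
# Minimal illustration (not Veeam internals): a "virtual full" re-references
# blocks already offloaded as objects, so only missing blocks are uploaded.
existing_objects = {"blk-001", "blk-002", "blk-003"}   # already in capacity tier
new_full_blocks = ["blk-001", "blk-002", "blk-004"]    # blocks of the new full

to_upload = [b for b in new_full_blocks if b not in existing_objects]
print(to_upload)   # ['blk-004'] -> only this block costs a PUT request
```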

Best regards,
Hannes
hubert.mroz
Novice
Posts: 5
Liked: never
Joined: Apr 27, 2020 2:32 pm
Full Name: Hubert Mroz
Contact:

Re: Copy to AWS S3

Post by hubert.mroz »

Hello,

Hope you all are having a great new year.
My mistake, I didn't read that I need GFS enabled for S3 Glacier.
The idea I had was to set one more monthly copy and move that copy to reside in S3 Glacier only.

So how can I do this for the two scenarios I currently have:
Scenario 1: daily backup, 14 restore points, with monthly Active Full Backups on a Friday. (The restore point count was definitely set wrong by my colleagues, because there is no way to keep only 14 restore points while performing a monthly Active Full.)
https://pasteboard.co/oFnCOs9Mc20H.png

Scenario 2: daily backup, 14 restore points, with a weekly Synthetic Full Backup on Saturday.
https://pasteboard.co/E28lkEimMrGi.png

Regards, HM
---
Regards,
Hubert Mroz
HannesK
Product Manager
Posts: 14844
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Copy to AWS S3

Post by HannesK »

Hello,
In both scenarios the backup chains are so short that I would not use Glacier. It's too expensive, because the monthly (or weekly) backup is deleted too quickly. You can do it, but be aware of the costs...
hubert.mroz wrote: The idea I had was to set one more monthly copy and move that copy to reside in S3 Glacier only.
One copy in Glacier is actually possible, but again, with the retention from the post it's a waste of money (rough numbers sketched below). I described it here. One GFS restore point will be available in both the capacity tier and the archive tier.
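
Very roughly, the cost problem looks like this. A sketch with assumed numbers only, using the 180-day minimum mentioned earlier in the thread and a placeholder storage price that needs to be checked against current AWS pricing:

```python
# Rough sketch with assumed numbers: Glacier-class storage bills a minimum
# storage duration, so a GFS point deleted early is still charged in full.
gfs_size_tb = 19          # size of one monthly GFS restore point
kept_days = 60            # how long retention actually keeps the point
minimum_days = 180        # minimum storage duration mentioned earlier
usd_per_tb_month = 1.0    # placeholder archive-tier price, USD -- check AWS

billed_months = max(kept_days, minimum_days) / 30
cost = gfs_size_tb * usd_per_tb_month * billed_months
print(f"billed for ~{billed_months:.0f} months (~${cost:.0f}) "
      f"even though the point only existed for {kept_days} days")
```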

If you would like two copies in the cloud, I suggest waiting until V12 (probably second half of 2022) and creating a backup copy job directly to another cloud provider (or bucket).

Best regards,
Hannes
