Agentless, cloud-native backup for Amazon Web Services (AWS)
kyw
Novice
Posts: 3
Liked: never
Joined: Oct 19, 2022 3:33 am
Full Name: kyw
Contact:

high PUT, COPY, POST, or LIST

Post by kyw »

I found that VBA backing up snapshots to S3 incurs high PUT, COPY, POST, or LIST charges: my 3 TB EBS volume was backed up as 3,000,000 x 1 MB objects in S3, which cost 3,000 x USD 0.005 = USD 15 per backup.

Is there any reason the backup needs to split the EBS volume into 1 MB parts? Would it be possible to use 10 MB or 100 MB blocks to reduce the PUT, COPY, POST, or LIST charges?
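The arithmetic behind those figures can be sketched as follows. This is a rough illustration only, assuming the published S3 Standard price of USD 0.005 per 1,000 PUT requests and a decimal 3 TB (3,000,000 MB) volume; check current AWS pricing for your region.

```python
# Rough S3 PUT-request cost for uploading a 3 TB backup at different object sizes.
# Assumption: USD 0.005 per 1,000 PUT/COPY/POST/LIST requests (S3 Standard).

PUT_PRICE_PER_1000 = 0.005   # USD per 1,000 requests
BACKUP_SIZE_MB = 3_000_000   # 3 TB, decimal, in MB

def put_cost(object_size_mb: float) -> float:
    """USD spent on PUT requests to upload the whole backup once."""
    objects = BACKUP_SIZE_MB / object_size_mb
    return objects / 1000 * PUT_PRICE_PER_1000

for size_mb in (1, 10, 100):
    print(f"{size_mb:>4} MB objects -> USD {put_cost(size_mb):6.2f} in PUT requests")
```

At 1 MB objects this reproduces the USD 15 figure above; 10 MB objects would cut it to USD 1.50, and 100 MB to USD 0.15, which is why the block size looks attractive to increase.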
jorgedlcruz
Veeam Software
Posts: 1372
Liked: 619 times
Joined: Jul 17, 2015 6:54 pm
Full Name: Jorge de la Cruz
Contact:

Re: high PUT, COPY, POST, or LIST

Post by jorgedlcruz »

Hello,
I would not call that high, but rather expected. The first backup will always look like that; subsequent restore points will only transfer incremental changes thanks to CBT, etc.

Is the actual backup cost not aligned with the policy's cost estimation? They should be similar, so you are always informed before you run the policy.

The 1 MB block size is recommended because, as mentioned, fewer objects will need to be moved when you trigger subsequent backups, and the same applies to restores, etc.

I am not aware of any way to change this in VB for AWS. With Veeam v12 you will have the option to run agent backups for cloud instances and send them directly to S3. It is too soon to tell whether the cost will be lower, but certainly some additional compression/data-reduction policies would be possible.
Jorge de la Cruz
Senior Product Manager | Veeam ONE @ Veeam Software

@jorgedlcruz
https://www.jorgedelacruz.es / https://jorgedelacruz.uk
vExpert 2014-2024 / InfluxAce / Grafana Champion
nielsengelen
Product Manager
Posts: 5636
Liked: 1181 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: high PUT, COPY, POST, or LIST

Post by nielsengelen »

This only applies to the first full backup. Subsequent backups will be incremental and thus smaller in size and cost. The policy's cost calculator will inform you about this. Using a 1 MB block size is the most cost-efficient. When you enable archiving to Glacier, a larger block size is used because, most of the time, you won't restore from Glacier often.
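The trade-off behind the 1 MB recommendation can be illustrated with a toy model: each changed block must be re-uploaded whole, so scattered small writes inflate incremental uploads far more when blocks are large. This is a hypothetical sketch, not Veeam's actual change-tracking engine; the disk size, change count, and random write pattern are all assumptions.

```python
# Toy model: 10,000 scattered 1 MB writes on a 3 TB disk. Each changed block
# is re-uploaded whole, so larger blocks mean more data per incremental.
import random

random.seed(42)
DISK_MB = 3_000_000  # 3 TB disk, decimal, in MB
changed_offsets_mb = random.sample(range(DISK_MB), 10_000)  # scattered 1 MB writes

def upload_mb(block_size_mb: int) -> int:
    """MB re-uploaded when changed regions are rounded up to whole blocks."""
    touched_blocks = {off // block_size_mb for off in changed_offsets_mb}
    return len(touched_blocks) * block_size_mb

for bs in (1, 10, 100):
    print(f"{bs:>4} MB blocks -> {upload_mb(bs):>10,} MB uploaded")
```

With 1 MB blocks the incremental uploads exactly the 10,000 MB that changed; with 100 MB blocks the same writes touch thousands of large blocks and the upload balloons by orders of magnitude, so the savings on PUT requests are quickly eaten by data transfer and storage churn.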

There won’t be much difference between using VB for AWS and an agent in v12, as they use the same technology.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
