Discussions related to using object storage as a backup target.
jcofin13
Service Provider
Posts: 72
Liked: 1 time
Joined: Feb 01, 2016 10:09 pm

AWS costs

Post by jcofin13 »

Our AWS cost is getting a bit out of control, and I'm wondering if there is anything that can be done to bring it down. I assume it's due to all the transactional (API request) charges they get you with, but I'm still investigating.
I notice that in the object storage settings, under "Bucket", we have "use infrequent access storage class" checked, which warns it "may result in higher costs".

Can this be turned off now that there are tons of backups in the job(s)? Would turning it off start a new chain where our costs would go up, or something strange like that?
Gostev
Chief Product Officer
Posts: 31638
Liked: 6793 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: AWS costs

Post by Gostev »

Yeah, AWS IA storage does come with significantly higher API costs and is best used for corner cases like the Move policy for GFS backups (and there's no immutability with it either; for immutable archives you can't beat Glacier).

I'm not 100% sure what happens if you uncheck this setting, but potentially it could work smoothly: new objects would be created with the new storage class, and old objects with the IA class would slowly be deleted by retention. But I need the responsible PM to confirm.
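
In the meantime, if you want to watch what actually happens after unchecking it, you could tally the bucket's objects by storage class and see the IA share shrink over time. A minimal boto3 sketch; the bucket name is a placeholder, and note that on a bucket with millions of objects the LIST calls themselves are billable, so S3 Inventory may be the better tool:

```python
# Tally objects by storage class to watch an IA -> Standard transition
# age out under retention. Assumes boto3 credentials are configured;
# the bucket name is a placeholder.
from collections import Counter

import boto3

s3 = boto3.client("s3")
counts = Counter()

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-veeam-backup-bucket"):
    for obj in page.get("Contents", []):
        # ListObjectsV2 reports the class per object: STANDARD,
        # STANDARD_IA, GLACIER, and so on.
        counts[obj.get("StorageClass", "STANDARD")] += 1

for storage_class, count in sorted(counts.items()):
    print(f"{storage_class}: {count}")
```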

Moving this thread to the dedicated forum.
jcofin13
Service Provider
Posts: 72
Liked: 1 time
Joined: Feb 01, 2016 10:09 pm

Re: AWS costs

Post by jcofin13 »

Thanks. Yeah, our S3 bucket is immutable as well, of course, and we have a lengthy GFS setup too.
I can't say why IA was turned on for this object storage, but it has always been on, likely since we started using the product. I don't have the history here; I inherited the setup after the fact, and the people who built it have moved on, so I can't ask them.
Gostev
Chief Product Officer
Posts: 31638
Liked: 6793 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: AWS costs

Post by Gostev » 1 person likes this post

Just in case you meant that you turned on immutability at the bucket level and configured retention there: that is explicitly not supported by Veeam (see the first Amazon S3 Immutability Limitation).
jcofin13
Service Provider
Posts: 72
Liked: 1 time
Joined: Feb 01, 2016 10:09 pm

Re: AWS costs

Post by jcofin13 »

Object Lock is on for the bucket, not any specific "S3 immutability", if that is something different. Sorry, I should have been clearer.
jcofin13
Service Provider
Posts: 72
Liked: 1 time
Joined: Feb 01, 2016 10:09 pm

Re: AWS costs

Post by jcofin13 »

I'm still a bit confused about using IA with S3. I don't imagine turning off this IA setting will cut down on the number of API calls; those calls will likely just hit the Standard tier instead.

The setting does say "may result in higher costs", but suggests this is driven by early deletions, i.e. short-term storage. Right now our operational restore window is set to 30+ days, which I assume controls this, and anything kept for less than that window would count as short-term storage. Is that correct?
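
If I'm reading the pricing page right, the early-delete charge would just be the unbilled remainder of a 30-day minimum, something like this (the rate and the 30-day minimum are my reading of the published IA terms, so worth double-checking):

```python
# Rough arithmetic for the Standard-IA early-delete charge, assuming
# the published 30-day minimum storage duration and an illustrative
# storage rate; check current AWS pricing for real numbers.
IA_RATE_PER_GB_MONTH = 0.0125  # illustrative us-east-1 list price
MIN_DAYS = 30

def ia_storage_cost(gb: float, days_stored: float) -> float:
    """Storage cost in IA, including the pro-rated charge for objects
    deleted before the 30-day minimum."""
    billable_days = max(days_stored, MIN_DAYS)
    return gb * IA_RATE_PER_GB_MONTH * (billable_days / 30)

# An object deleted after 10 days is still billed for the full 30 days:
print(ia_storage_cost(100, 10))  # 1.25, same as keeping it 30 days
print(ia_storage_cost(100, 45))  # 1.875
```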

I did open a support case asking about this checkbox and about turning it back off, but support suggested that having it on is meant to save money and should cost less. They were not sure what exactly would happen if we turned it off; I suspect all the API calls would then go against the Standard storage tier, driving costs up even more. It seems hard to know from the console alone.
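
One thing I can try is breaking the bill down by usage type, which would at least show whether request charges or storage charges dominate. A rough boto3 sketch, assuming Cost Explorer is enabled on the account (the dates are placeholders):

```python
# Break down a month of S3 spend by usage type so request charges
# (e.g. Requests-Tier1/Tier2) show up next to storage charges
# (TimedStorage-*). Assumes Cost Explorer is enabled on the account.
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-04-01", "End": "2024-05-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE",
                           "Values": ["Amazon Simple Storage Service"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    usage_type = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{usage_type}: ${cost:.2f}")
```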

Ultimately, switching providers would be an option, but that's also confusing. I have a bunch of questions if we decide to go that route, such as: can you have two different object storage providers in your SOBR if you seal the old one and wait for it to age out? I don't think you can. We can't really pull 300 TB back on premises and then push it out to a new provider, so that's not an option; the fees would be huge, and we don't have that kind of storage locally. Is there a way to seal the current object storage, then move/remove our local SOBR extents to a new SOBR with the new provider's object storage? This seems plausible, but I'm not sure you can remove local performance extents, add them to a new SOBR, and retain the data properly. Anyway, challenges for another time, I guess.
Gostev
Chief Product Officer
Posts: 31638
Liked: 6793 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: AWS costs

Post by Gostev » 1 person likes this post

As a rule of thumb, the cheaper an Amazon S3 storage class is for storing data, the more expensive its API calls (write/read/update) are. And these fees are tuned to more or less "compensate" for each other, with API costs growing faster than storage costs shrink.
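
To put rough numbers on it, here's a back-of-the-envelope comparison. The figures are illustrative us-east-1 list prices, and IA's retrieval and early-delete fees are not modeled, so always check the current pricing page:

```python
# Back-of-the-envelope monthly cost by storage class: illustrative
# us-east-1 list prices (storage $/GB-month, PUT $/1,000 requests).
# IA retrieval fees and early-delete charges are NOT modeled here.
CLASSES = {
    "STANDARD":    {"storage": 0.023,  "put_per_1k": 0.005},
    "STANDARD_IA": {"storage": 0.0125, "put_per_1k": 0.01},
    "GLACIER":     {"storage": 0.0036, "put_per_1k": 0.03},
}

def monthly_cost(cls: str, gb: float, puts: int) -> float:
    price = CLASSES[cls]
    return gb * price["storage"] + puts / 1000 * price["put_per_1k"]

# 10 TB resident with 50 million PUTs/month (small-object churn):
for name in CLASSES:
    print(name, round(monthly_cost(name, 10_240, 50_000_000), 2))
# STANDARD    485.52
# STANDARD_IA 628.0    <- "cheaper" storage loses once requests dominate
# GLACIER     1536.86  <- only viable with very large objects / few PUTs
```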

As a result, for most scenarios it does not make economic sense to use anything but the two extremes with Veeam: regular S3 storage (as the regular backup target for daily backups) or Glacier (as the archive location for GFS backups), where we repackage backup data into very big objects before writing to Glacier, which addresses its very high API costs.
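
The object size effect is easy to see, since request charges scale with the object count rather than with bytes. The 512 MB object size below is purely illustrative, not our actual repackaging size:

```python
# Why repackaging into big objects matters: PUT charges scale with the
# number of objects. Illustrative Glacier PUT price of $0.03 per 1,000.
PUT_PER_1K = 0.03

def put_cost(total_tb: float, object_mb: float) -> float:
    objects = total_tb * 1024 * 1024 / object_mb
    return objects / 1000 * PUT_PER_1K

print(put_cost(300, 1))    # ~$9,437 to write 300 TB as 1 MB objects
print(put_cost(300, 512))  # ~$18 with 512 MB objects
```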

Have you considered using the SOBR Archive Tier to offload older backups to Glacier? Or do you not have very long retention anyway, and it's basically all daily backups kept for a few weeks? In the latter case, using the regular S3 storage class may actually be cheaper for you.
antonk
Novice
Posts: 7
Liked: 1 time
Joined: Jun 01, 2017 3:40 pm
Full Name: Anton Kobrinsky

Re: AWS costs

Post by antonk » 1 person likes this post

We had the same issue, with costs doubling and even tripling on our S3 repository.
As it turned out, that was because of the immutability policy and the high cost of operations such as PUT and GET. Veeam KB 4470 describes this scenario (it was provided to me by support) and helped resolve the issue.
https://www.veeam.com/kb4470
william.scholes
Service Provider
Posts: 11
Liked: 6 times
Joined: Nov 24, 2020 2:30 am
Full Name: William Scholes

Re: AWS costs

Post by william.scholes »

Having used Commvault to store data in S3, I know that the storage tier can be set operationally by the process storing the data. Given that, would it not be preferable to state this clearly in the Veeam GUI and provide an option to set the tier per job type?
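
For reference, the S3 API does let the writer pick the storage class on every PUT, so per-job tiering is technically possible on the client side. A minimal boto3 sketch; the bucket and key are placeholders:

```python
# The storage class is chosen per request on the S3 API, so a backup
# client can tier each write as it likes. Names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-veeam-backup-bucket",
    Key="backups/weekly/restore-point.blk",
    Body=b"...",
    StorageClass="STANDARD_IA",  # or STANDARD, GLACIER, DEEP_ARCHIVE, ...
)
```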
robg
Expert
Posts: 172
Liked: 18 times
Joined: Aug 15, 2014 11:21 am
Full Name: Rob

Re: AWS costs

Post by robg » 1 person likes this post

You can reduce your AWS costs by going back to co-location at a datacenter.
rciscon
Influencer
Posts: 21
Liked: 3 times
Joined: Dec 14, 2010 8:48 pm
Full Name: Raymond Ciscon

Re: AWS costs

Post by rciscon » 1 person likes this post

As previously posted, high-density local storage is a commodity, and huge amounts of storage are possible at prices far lower than what your costs going to AWS or Azure would be.

Talk to your trusted vendor for information on bringing your storage needs back on site, or to a co-lo.
RubinCompServ
Service Provider
Posts: 278
Liked: 69 times
Joined: Mar 16, 2015 4:00 pm
Full Name: David Rubin

Re: AWS costs

Post by RubinCompServ »

Gostev wrote: May 10, 2024 2:48 pm Have you considered using the SOBR Archive Tier to offload older backups to Glacier? Or do you not have very long retention anyway, and it's basically all daily backups kept for a few weeks? In the latter case, using the regular S3 storage class may actually be cheaper for you.
Don't you need to be using AWS for the Capacity Tier in order to use Glacier for the Archive Tier?
Gostev
Chief Product Officer
Posts: 31638
Liked: 6793 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: AWS costs

Post by Gostev »

That is correct.