Discussions related to using object storage as a backup target.
ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Wasabi bucket limitations

Post by ctg49 »

We got an email on Friday that Wasabi is updating their service to impose a limit of 100 million objects per S3 bucket. As of right now, our Wasabi bucket holds ~320M objects, putting it well over the new limit. We have what I assumed to be a relatively small S3 presence of about 100TB, around 1/5th of which is deleted data. Has anyone else run into a similar limit with Wasabi or another provider? Is 320M objects within the realm of normalcy for a Veeam deployment at this scale, or is something strange happening?

Is there some kind of resolution to this? We can only have one SOBR, since we're on Enterprise rather than Enterprise Plus, and I wouldn't want to break things out separately anyway, as it just adds unneeded complication. I don't see a way of adding multiple capacity tiers to a SOBR either, which makes sense.
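
For anyone wanting to sanity-check their own bucket before a limit like this bites, here is a minimal sketch of counting objects via Wasabi's S3-compatible API with boto3. The endpoint URL and bucket name are placeholders, credentials are assumed to come from your normal AWS config, and listing hundreds of millions of keys this way is slow, so treat it as a rough check rather than tooling:

Code: Select all

# Rough object count for an S3-compatible bucket (bucket name and endpoint are placeholders).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",  # assumed Wasabi endpoint; use your region's
)

total = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-veeam-offload-bucket"):  # hypothetical bucket
    total += page.get("KeyCount", 0)

print(f"Objects in bucket: {total:,}")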

Case opened, #03664429
veremin
Product Manager
Posts: 20271
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Wasabi bucket limitations

Post by veremin »

Did all these files originate from Veeam? What storage optimization level is set for your backup and backup copy jobs? LAN or WAN target, by any chance? Thanks!
ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Re: Wasabi bucket limitations

Post by ctg49 »

Yep, this bucket is purely Veeam's.

The jobs whose backups are now sourced in the S3 bucket originated on deduplicating backup storage, so they were configured with no compression and WAN target (which, IIRC, is the recommendation for deduplicating volumes). Since shifting to a 'local ReFS -> offsite S3' model, we've changed the jobs to extreme compression and local target (not large blocks). This covers a year or more of backups in most cases, so it'll be a while before the old data ages out.

Per my case information, it might be possible to increase the size of the objects in Veeam/Wasabi (and thus reduce the object count), assuming there's a direct correlation between objects written and objects stored. Is there a mapping/chart somewhere that shows the block sizes for the five compression levels and the four storage optimization levels? Having this information available would help a lot when right-sizing storage devices and when troubleshooting with vendors like Wasabi.

Thanks!
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Wasabi bucket limitations

Post by backupquestions »

Could you post what Veeam's recommendation was in this case? I went with Wasabi and now fear I'll run into the same issue.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Wasabi bucket limitations

Post by Gostev »

ctg49 wrote: Jul 15, 2019 12:56 pm
We have what I assumed to be a relatively small S3 presence of about 100TB, around 1/5th of which is deleted data.
Can you clarify what exactly you mean by "deleted data"?
ctg49 wrote: Jul 15, 2019 12:56 pm
Has anyone else run into a similar limit with Wasabi or another provider?
This is certainly unique to Wasabi - and I find it really strange to be honest, because it goes against one of the main promises of the public cloud: infinite scale. I will check with my Wasabi contacts directly to see what's up with this change.
ctg49 wrote: Jul 15, 2019 6:56 pm
Per my case information, it might be possible to increase the size of the objects in Veeam/Wasabi (and thus reduce the object count), assuming there's a direct correlation between objects written and objects stored.
Yes, the correlation is direct.
ctg49 wrote: Jul 15, 2019 6:56 pm
Is there a mapping/chart somewhere that shows the block sizes for the five compression levels and the four storage optimization levels? Having this information available would help a lot when right-sizing storage devices and when troubleshooting with vendors like Wasabi.
WAN target = 256KB
LAN target = 512KB
Local target = 1MB (default)
Local target (large blocks) = 4MB

So yes, you can reduce the number of blocks offloaded (and thus objects stored in Wasabi) several times over by switching your source jobs to 4MB blocks. However, this will increase your incremental backup size by roughly 2x. That is not an issue for object storage consumption, assuming you're offloading only full backups to S3, but you may need more on-prem storage to hold those larger incrementals.
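
To put rough numbers on the table above: the object count is driven by the source block size (roughly one object per block), while compression only shrinks each stored object. Here is a minimal estimation sketch, not a Veeam-provided formula; the 2:1 compression ratio is purely an assumed placeholder, and metadata objects are ignored:

Code: Select all

# Back-of-the-envelope estimate: ~1 object per source block; metadata objects ignored.

BLOCK_SIZES_KB = {
    "WAN target": 256,
    "LAN target": 512,
    "Local target": 1024,
    "Local target (large blocks)": 4096,
}

def estimate(source_tb, block_kb, compression_ratio=2.0):
    objects = int(source_tb * 1024**3 / block_kb)   # object count depends on block size only
    stored_tb = source_tb / compression_ratio        # billed capacity (assumed 2:1 ratio)
    return objects, stored_tb

for name, kb in BLOCK_SIZES_KB.items():
    objects, stored = estimate(80, kb)
    print(f"{name:28s} ~{objects:>12,} objects, ~{stored:.0f}TB stored")

With ~80TB of source data, the WAN target row works out to ~335M objects, which lines up reasonably with the ~320M reported above, while 1MB blocks land around 84M and 4MB blocks around 21M.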
ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Re: Wasabi bucket limitations

Post by ctg49 »

Gostev wrote: Jul 16, 2019 2:24 pm
Can you clarify what exactly you mean by "deleted data"?
In Wasabi, written blocks are kept (and billed) for 90 days, regardless of deletion status. I assume this is a way for them to hedge costs, since you're charged monthly for data whether it's deleted or not. So in our case, we've got about 80TB of active data and 20TB currently in deleted status, pending each block's 90-day window (from the block's write time) running out before it disappears.
Gostev wrote: Jul 16, 2019 2:24 pm
This is certainly unique to Wasabi - and I find it really strange to be honest, because it goes against one of the main promises of the public cloud: infinite scale. I will check with my Wasabi contacts directly to see what's up with this change.
Yeah, I agree with you there. It creates a major pain point both 'in the trenches' and with management when the TOS is changed after we've given them our money, not to mention after several weeks/months of offloading data.
Gostev wrote: Jul 16, 2019 2:24 pm
Yes, the correlation is direct.
Wasabi agrees with you there, per my ticket with them. It appears to be a 1:1 correlation on their side.
Gostev wrote: Jul 16, 2019 2:24 pm
WAN target = 256KB
LAN target = 512KB
Local target = 1MB (default)
Local target (large blocks) = 4MB

So yes, you can reduce the number of blocks offloaded (and thus objects stored in Wasabi) several times over by switching your source jobs to 4MB blocks. However, this will increase your incremental backup size by roughly 2x. That is not an issue for object storage consumption, assuming you're offloading only full backups to S3, but you may need more on-prem storage to hold those larger incrementals.
Those numbers are perfect. Does the compression level change the block sizes in any way? Or just perform compression on the data sliding into those blocks?

Hypothetically, if our existing data load (which is roughly our baseline today) sits at ~300M objects under the WAN target block size, switching everything to 1MB blocks should pull us down to around 75M objects, under their new limit, and 4MB would give us ample room for future growth as well.

What did you mean by only full backups being offloaded? Our offloads send both fulls and incrementals, depending on when a given backup file passes through the offload window.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Wasabi bucket limitations

Post by Gostev »

Compression is performed on the data sliding into those blocks. If anything, it will be slightly better with 4MB blocks because of the nature of compression algorithms (the more data they work against, the better the data reduction ratio they can provide).
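
As a quick illustration of that last point (zlib stands in here purely for demonstration; Veeam's own compression algorithms are different, and the sample data is artificially repetitive, so the effect is exaggerated), compressing the same data in larger chunks yields a better overall ratio:

Code: Select all

# Compress the same data as many small chunks vs. a few large ones and
# compare the totals. Illustration only; not the backup software's algorithm.
import os, zlib

data = (os.urandom(4096) * 64) * 256   # ~64MB of artificially repetitive sample data

def compressed_size(blob, chunk_size):
    return sum(
        len(zlib.compress(blob[i:i + chunk_size]))
        for i in range(0, len(blob), chunk_size)
    )

for chunk_kb in (256, 512, 1024, 4096):
    size_mb = compressed_size(data, chunk_kb * 1024) / 1024 / 1024
    print(f"{chunk_kb:>5}KB chunks -> {size_mb:.2f}MB compressed")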

Typically, Veeam users who implement our reference architecture with primary and secondary repositories offload only GFS full backups, which are created by Backup Copy jobs in the secondary repository. Regular incremental chains usually fall within the operational restore window, so they should not be offloaded to start with; and beyond the operational restore window, it usually makes no sense to keep daily incrementals, either locally or offloaded.
ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Re: Wasabi bucket limitations

Post by ctg49 »

Understood. Under our previous design, we maintained a forever-incremental chain going back a year, as this was deemed the easiest and most space-efficient way to provide reasonable point-in-time restores for a year, as requested by our leadership. Now that we have a better way of dealing with tons of multi-TB backups, we may shift this strategy.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Wasabi bucket limitations

Post by Gostev »

I am in contact with Wasabi and they are reviewing this decision again. I will update once I know more, but it sounds like they are considering multiple options, such as excluding Veeam buckets from this limitation completely, given the nature of our workload.
skrause
Veteran
Posts: 487
Liked: 105 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: Wasabi bucket limitations

Post by skrause » 1 person likes this post

ctg49 wrote: Jul 16, 2019 4:09 pm
In Wasabi, written blocks are kept (and billed) for 90 days, regardless of deletion status. I assume this is a way for them to hedge costs, since you're charged monthly for data whether it's deleted or not. So in our case, we've got about 80TB of active data and 20TB currently in deleted status, pending each block's 90-day window (from the block's write time) running out before it disappears.
FYI - when I talked to the Wasabi guys at VeeamON, they said they were working on changing the 90-day minimum to a 30-day one. You might want to ask your Wasabi reps whether you can switch to that model if you have data up there with retention shorter than 90 days.
Steve Krause
Veeam Certified Architect
ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Re: Wasabi bucket limitations

Post by ctg49 » 2 people like this post

skrause wrote: Jul 17, 2019 1:16 pm
FYI - when I talked to the Wasabi guys at VeeamON, they said they were working on changing the 90-day minimum to a 30-day one. You might want to ask your Wasabi reps whether you can switch to that model if you have data up there with retention shorter than 90 days.
Thanks for the suggestion. I passed this along to their support team, and they said it was indeed being worked on for a release later this year, but the engineer updated my account immediately. I'll monitor our deleted-items count and see if it drops over the next day or so (no idea when scrubbing runs).
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Wasabi bucket limitations

Post by Gostev » 1 person likes this post

Gostev wrote: Jul 17, 2019 12:56 pm
I am in contact with Wasabi and they are reviewing this decision again. I will update once I know more, but it sounds like they are considering multiple options, such as excluding Veeam buckets from this limitation completely, given the nature of our workload.
Wasabi's PM is going to post their official decision on the original issue raised in this topic here soon - stay tuned.
wasabi-jim
Technology Partner
Posts: 3
Liked: 9 times
Joined: May 08, 2017 8:43 pm
Full Name: Jim Donovan
Location: Boston, MA USA
Contact:

Re: Wasabi bucket limitations

Post by wasabi-jim » 7 people like this post

Hi Folks - Thanks for your recent input on the topic of Wasabi imposing a per-bucket object count limit. After getting your feedback, along with feedback from other customers and partners, we have decided to suspend the implementation of this limit. We had considered the limit as a means of maximizing bucket performance, but we have since come up with a better way to handle this in our bucket database infrastructure, and the new approach does not require per-bucket object count limits.

On the topic of the 90-day minimum storage retention policy, we are in the process of rolling out a 30-day minimum storage policy for customers whose optimal backup strategy will benefit from it. Any Veeam user that is a Wasabi customer can request a switch to the 30-day policy via a note to support@wasabi.com. As an FYI, these kinds of minimum storage retention policies are not unique to Wasabi (for example, AWS S3 IA has a 30-day policy, AWS Glacier a 90-day policy, and AWS Glacier Deep Archive a 180-day policy).
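
For anyone weighing the 30-day policy against the 90-day one, here is a minimal sketch of how a minimum storage duration affects the bill for early-deleted data; the per-TB price and the simple 30-day-month proration are placeholders, not quoted Wasabi rates or billing mechanics:

Code: Select all

# How a minimum storage duration affects billing for data deleted early.
# price_per_tb_month and the 30-day month are placeholders; plug in your real rate.

def billed_months(age_days_at_delete, minimum_days):
    # You pay for at least `minimum_days` from the time the data was written,
    # even if it is deleted sooner.
    return max(age_days_at_delete, minimum_days) / 30.0

def early_delete_cost(tb, age_days_at_delete, minimum_days, price_per_tb_month):
    return tb * billed_months(age_days_at_delete, minimum_days) * price_per_tb_month

# Example: 20TB deleted 30 days after it was written, at a placeholder $6/TB/month.
for minimum in (90, 30):
    cost = early_delete_cost(20, 30, minimum, 6.0)
    print(f"{minimum}-day minimum: ~${cost:,.0f} billed for that 20TB")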

Many thanks to all of the Veeam users for their input on these and related Veeam + Wasabi topics.

wasabi-jim
Jim Donovan
Wasabi Product Management
ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Re: Wasabi bucket limitations

Post by ctg49 » 1 person likes this post

Awesome news, Jim! As general feedback to Wasabi/Veeam, it'd be great if more of this kind of information sharing and interaction happened when possible. I think most of us are in the same boat: we'd love to be good stewards of our data and the services we use, but if it's not our sole role, some of the minutiae of how these products and services interact can get lost. Some kind of recommendations guide from Wasabi covering workloads, best practices, and tuning, or an 'I see you're setting up an S3/block repository, consider this for your jobs' popup or link in Veeam, might go a long way toward explaining how some of the tweaky parts of Veeam interact with these services in positive or negative ways.
TitaniumCoder477
Veteran
Posts: 315
Liked: 48 times
Joined: Apr 07, 2015 1:53 pm
Full Name: James Wilmoth
Location: Kannapolis, North Carolina, USA
Contact:

Re: Wasabi bucket limitations

Post by TitaniumCoder477 » 1 person likes this post

I have been considering Wasabi lately, and this from the Veeam Community Forums Digest caught my attention and caused my heart to skip a couple beats. Thank heavens it had a favorable outcome.
aman4God
Enthusiast
Posts: 25
Liked: 4 times
Joined: Feb 17, 2015 4:34 pm
Full Name: Stanton Cole
Contact:

Re: Wasabi bucket limitations

Post by aman4God » 1 person likes this post

I too have been planning to use Wasabi for cloud backup, and had this not had a favorable outcome, it would have changed my mind as well. Thank you to everyone contributing to this topic.