According to Amazon, the size of an AWS S3 bucket can grow without limit.
Source: https://aws.amazon.com/s3/faqs/#:~:text ... f%205%20TB.
According to the Veeam Best Practices guide, we are advised to use "multiple buckets".
Source: https://bp.veeam.com/vbcloud/guide/aws/ ... orage.html
I read on the net that although some cloud providers claim unlimited bucket size, this does come with a performance penalty at some point. (Sorry, I lost my source.)
My question to the Veeam community is whether there are any real-world numbers on practical bucket sizes before splitting up into multiple buckets. I have no reference of scale here. I mean, is a 400 TB bucket ridiculous, or common practice?
-
- Enthusiast
- Posts: 32
- Liked: 6 times
- Joined: Apr 05, 2023 1:06 pm
- Full Name: maanlicht
- Contact:
-
- Chief Product Officer
- Posts: 31707
- Liked: 7212 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Practical limits to AWS buckets
It scales pretty well indeed, but you will want to stay under 1 PB per AWS S3 bucket with the default block size (1 MB), as we have seen issues with one of our customers as they started reaching this value. Amazon told them this was due to the sheer number of objects in a single bucket. So if you have more data than, say, 500 TB, just use a scale-out repository with multiple buckets registered so you don't have to worry about this later.
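To put that figure in perspective, here is a back-of-envelope sketch (my own arithmetic, not an official Veeam formula) of the object count implied by a given amount of backup data at the default 1 MB block size; real buckets also hold metadata objects, so treat this as a lower bound:

```python
def approx_object_count(data_tb: float, block_mb: float = 1.0) -> int:
    """Approximate number of objects needed to store data_tb terabytes
    of backup data when each object is a block of block_mb megabytes.
    Ignores metadata objects, so the real count is somewhat higher."""
    return int(data_tb * 1024 * 1024 / block_mb)

# A 500 TB bucket at 1 MB blocks already means ~half a billion objects:
print(approx_object_count(500))    # 524288000
# and a 1 PB bucket crosses the one-billion-object mark:
print(approx_object_count(1024))   # 1073741824
```

This makes it easier to see why the practical limit is really about object count rather than raw capacity: doubling the block size would halve the number of objects for the same data footprint.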
"Some cloud providers" will likely have even lower limit as few have their S3 engine as polished and mature as AWS S3. I think those claims of an unlimited bucket size come primarily from their SMB focus, where even 100TB equals "unlimited" even just because data footprint of typical SMB customers and physical limits of their Internet connection speed.
"Some cloud providers" will likely have even lower limit as few have their S3 engine as polished and mature as AWS S3. I think those claims of an unlimited bucket size come primarily from their SMB focus, where even 100TB equals "unlimited" even just because data footprint of typical SMB customers and physical limits of their Internet connection speed.
Re: Practical limits to AWS buckets
Thanks Gostev, that is good information.
I don't think I'll exceed 500 TB anytime soon, so one bucket should be fine then. However, for legal reasons I am required to store multiple datasets, each with different retention/immutability requirements.
In this forum post a similar question is asked: object-storage-f52/multiple-sobrs-with- ... 67285.html
Are the practical performance limits you mentioned the same when using multiple repositories on the same bucket, since immutability is controlled at the repository level?
Re: Practical limits to AWS buckets
Yes, they are the same: the only thing that matters from an object storage scalability perspective is the total number of objects in a bucket.