-
- Lurker
- Posts: 1
- Liked: never
- Joined: Dec 16, 2020 1:52 pm
- Full Name: Claudio Steiner
- Contact:
Multiple S3 buckets in Capacity Tier in SOBR
Hi
Cloudian S3 only supports buckets up to 20 TB.
In large environments, 20 TB is quickly reached.
It would be good if you could specify multiple S3 buckets in the SOBR, so you could keep the 20 TB limit per bucket.
Thank you and Regards
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
Hi, Claudio!
Wow, I did not realize this was the case with Cloudian.
But yes, we have this feature on our short-term roadmap (post v11) for a similar reason: even Amazon S3 seems to start struggling when approaching 1 PB of data in a single bucket, even though officially there are no documented limits on bucket size. So, we plan to support multiple buckets with some simple round-robin logic.
Thanks!
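For illustration, the simple round-robin logic described above could look something like this sketch (the class, bucket names, and API are hypothetical, invented for this example; this is not Veeam's actual implementation):

```python
from itertools import cycle

class RoundRobinBucketPlacer:
    """Distribute offloaded objects across several capacity-tier buckets.

    Hypothetical sketch of round-robin bucket placement: each new object
    goes to the next bucket in the rotation, so data and load spread
    evenly and no single bucket grows past its recommended size alone.
    """

    def __init__(self, bucket_names):
        if not bucket_names:
            raise ValueError("at least one bucket is required")
        self._cycle = cycle(bucket_names)

    def next_bucket(self):
        # Advance the rotation and return the bucket for the next object.
        return next(self._cycle)

placer = RoundRobinBucketPlacer(["veeam-cap-01", "veeam-cap-02", "veeam-cap-03"])
placements = [placer.next_bucket() for _ in range(6)]
print(placements)
```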
-
- Veeam Software
- Posts: 296
- Liked: 141 times
- Joined: Jul 24, 2018 8:38 pm
- Full Name: Stephen Firmes
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
@claudio. Cloudian doesn't have a limit on their bucket sizes, but they do recommend many small buckets to spread the workload across. So the 20TB size may be what they recommend for your environment.
Steve Firmes | Senior Solutions Architect, Product Management - Alliances @ Veeam Software
-
- Service Provider
- Posts: 453
- Liked: 30 times
- Joined: Dec 28, 2014 11:48 am
- Location: The Netherlands
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
Nice topic. Are there any disadvantages regarding capacity usage when using buckets larger than 20 TB?
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
I assume Cloudian starts having some performance issues.
-
- Service Provider
- Posts: 453
- Liked: 30 times
- Joined: Dec 28, 2014 11:48 am
- Location: The Netherlands
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
Thanks, that would be a logical answer.
With a Cloudian solution providing 300 TB of usable capacity:
- What should we consider regarding the configuration of backup jobs in Veeam pointing to a SOBR?
- Is there any relationship between the block size chosen in the backup job and the object size that is used by a Cloudian environment?
I can imagine that the object size dictates the real usable capacity within a Cloudian configuration.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
The block size chosen in the backup job settings will be the object size that is being used by a Cloudian environment.
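As a rough illustration of what that mapping means for the 300 TB question above (my own back-of-the-envelope math, not official guidance from the thread): since the job's block size determines the stored object size, the object count for a given amount of offloaded data follows directly. The ~2x compression ratio below is an assumption, not a guarantee:

```python
def estimated_object_count(offloaded_tb, block_size_mb, compression_ratio=2.0):
    """Rough object count for a bucket: block size divided by an assumed
    compression ratio gives the average stored object size."""
    avg_object_bytes = block_size_mb * 1024**2 / compression_ratio
    offloaded_bytes = offloaded_tb * 1000**4  # decimal TB
    return int(offloaded_bytes / avg_object_bytes)

# 300 TB offloaded with the default 1 MB block size (~512 KB objects)
print(f"{estimated_object_count(300, 1):,}")  # roughly 572 million objects
```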
-
- Service Provider
- Posts: 454
- Liked: 86 times
- Joined: Jun 09, 2015 7:08 pm
- Full Name: JaySt
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
Sorry to kick this topic, but I'm looking for some clarification to justify design choices with Cloudian.
Is this 20 TB still a suggested limit, and is it documented anywhere?
I think it's a very low limit, considering object storage's association with "very good scalability". I too have a project for (starting with) 300 TB of Cloudian space and really would not like to have to cut this 300 TB into 10+ buckets if I don't have to.
What would the actual problem be beyond 20 TB? (Looking for a bit more info than "performance issues".)
Yes, I'm also trying to get some explanation from Cloudian on this, but maybe someone here has a good view on it (from field experience, perhaps).
BTW, I checked the VeeamON 2022 sessions around object storage. The general recommendation for bucket size limits (on-prem object stores) was around 50 TB (field experience?).
I found that limit a bit low as well, but it is what it is. I have a bit of a hard time getting this explained, but that's more because of the perception that object storage has no limitations at all, certainly not at numbers like 20 TB or 50 TB. I can show people some calculations of the object count using specific block sizes with Veeam, but those (big) numbers are just... well... big numbers. Understanding why those (big) numbers would be a problem (and would result in a limit being there) is another story.
I also looked at the documented limits for MinIO, just as an example.
They do not seem to have bucket size limits or object count limits, only an object size limit. I'm sceptical about those "no limit" claims, after reading up on these best practices and even AWS having issues at some point (bucket size).
Could be there's no technical limit (just because you can doesn't mean you should), but there's a big gap between 20 TB and "no limit". Veeam environments run into any limit under 100 TB pretty fast in my bubble.
https://docs.min.io/docs/minio-server-l ... enant.html
Veeam Certified Engineer
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
I would suggest that you check directly with Cloudian on their current limits. They are the ultimate source for such information, and it is safe to assume they did not sit still since this topic was created but kept improving their solution...
-
- Service Provider
- Posts: 454
- Liked: 86 times
- Joined: Jun 09, 2015 7:08 pm
- Full Name: JaySt
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
I got an answer from Cloudian about the limits mentioned here.
The latest Cloudian release, released in June, no longer has limits on the object count or total size of a bucket. This is due to internal optimizations.
However, from a performance standpoint, 100 million objects per bucket and a max bucket size of 250 TB are recommended at this time.
Veeam Certified Engineer
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
So 100 million objects x 512 KB average object size with the default block size setting = 50 TB real limit with Veeam?
Or 200 TB when using 4 MB blocks; both limits align nicely this way (not that I would recommend using 4 MB blocks).
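Spelling out that arithmetic (using binary megabytes, decimal TB, and the assumed ~2x compression, so the results land slightly above the round 50 TB / 200 TB figures; the 100-million-object ceiling is Cloudian's recommendation quoted above):

```python
OBJECTS_PER_BUCKET = 100_000_000  # Cloudian's recommended per-bucket ceiling

def practical_bucket_limit_tb(block_size_mb, compression_ratio=2.0):
    """Practical bucket size with Veeam: the average stored object is the
    job's block size divided by an assumed ~2x compression ratio."""
    avg_object_bytes = block_size_mb * 1024**2 / compression_ratio
    return OBJECTS_PER_BUCKET * avg_object_bytes / 1000**4  # decimal TB

print(practical_bucket_limit_tb(1))  # default 1 MB blocks -> ~52 TB
print(practical_bucket_limit_tb(4))  # 4 MB blocks -> ~210 TB
```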
-
- Service Provider
- Posts: 454
- Liked: 86 times
- Joined: Jun 09, 2015 7:08 pm
- Full Name: JaySt
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
Yes, I just made the same calculation a moment ago.
It seems that the "safe" limit of 50 TB mentioned by Hannes at VeeamON is indeed in line with the limits mentioned by Cloudian. However, they don't present these numbers as strict limits, but as a recommendation, mainly because of performance. Probably performance around bulk deletes.
Curious though: why would you not recommend 4 MB even though it's on-prem? (No large-transfer disadvantages compared to leveraging AWS/Azure, right?)
Veeam Certified Engineer
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
Storage consumption. Using 4 MB blocks approximately doubles the incremental backup size compared to 1 MB blocks, which for forever-incremental backups in turn means close to double the total disk space consumed by your backups. Huge difference...
-
- Service Provider
- Posts: 454
- Liked: 86 times
- Joined: Jun 09, 2015 7:08 pm
- Full Name: JaySt
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
To be clear about what you mean: that's because when using larger blocks (4 MB) in Veeam, CBT will also report larger blocks during incrementals that need to be sent to the target?
Veeam Certified Engineer
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
CBT has nothing to do with this. But otherwise you are right: Veeam will operate with this block size during both full and incremental backups, so whenever some little data changes on the disk, we will grab the 4 MB chunk of the disk image that surrounds the changed data (instead of just 1 MB), and this is what will need to be sent to and stored on the target.
Sometimes this won't add any overhead, because the changed blocks are backing a large file. And other times it will result in lots of overhead, because even when only one small document of a few hundred KB in size changes, Veeam will have to grab the entire 4 MB chunk of the disk image surrounding the changed area.
In our testing with regular VMs, we saw on average a 2x increase in incremental backup sizes when switching from 1 MB to 4 MB block size.
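The amplification described above can be sketched numerically (a toy model for illustration, not Veeam's actual change tracking): with fixed-size image-level blocks, a single small change dirties the whole block containing it, so larger blocks mean more data transferred per change:

```python
def incremental_transfer_bytes(changed_offsets, block_size):
    """Toy model: each changed byte offset dirties the fixed-size block
    that contains it; the whole dirty block is read and sent to the target."""
    dirty_blocks = {offset // block_size for offset in changed_offsets}
    return len(dirty_blocks) * block_size

MB = 1024**2
# Three small, scattered document edits on a disk image (byte offsets)
changes = [10 * MB + 5, 200 * MB + 123, 941 * MB + 77]

print(incremental_transfer_bytes(changes, 1 * MB) // MB)  # 3 MB sent with 1 MB blocks
print(incremental_transfer_bytes(changes, 4 * MB) // MB)  # 12 MB sent with 4 MB blocks
```

A few bytes actually changed, yet the 4 MB block size sends four times the data of the 1 MB block size for the same scattered edits.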
-
- Service Provider
- Posts: 454
- Liked: 86 times
- Joined: Jun 09, 2015 7:08 pm
- Full Name: JaySt
- Contact:
Re: Multiple S3 buckets in Capacity Tier in SOBR
Understood! Thanks for clarifying.
Veeam Certified Engineer