-
- Service Provider
- Posts: 14
- Liked: never
- Joined: Aug 11, 2017 9:09 pm
- Full Name: Matt Burnette
- Contact:
5TB size limit on S3
I am looking to place some large backups into S3 using the new integration.
We have several large file servers with volumes as large as 16 TB each.
A full for one of these servers is about 56 TB.
I know S3 has a 5TB limit per object, and I was wondering if anyone has run into a problem putting large backups like these into S3.
Thanks!
-
- Product Manager
- Posts: 20406
- Liked: 2299 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: 5TB size limit on S3
Our object storage integration doesn't move backup files to object storage as a whole; instead, it operates at the backup file block level and offloads individual blocks. More information can be found here
The default backup file block size is 1MB, and the typical compression ratio is 2:1, so the average S3 object size with Veeam is 512KB - well below the 5TB limit
Thanks!
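The arithmetic above can be sketched quickly. A minimal Python back-of-envelope, assuming the 1 MB default block size and 2:1 compression ratio mentioned in the reply, applied to the 56 TB full from the original post (illustrative figures only, not measured values):

```python
# Back-of-envelope for Veeam's block-level offload to S3.
# Assumptions taken from the thread, not measured: 1 MB source block
# size, a typical 2:1 compression ratio, and a 56 TB full backup.

BLOCK_SIZE_KB = 1024            # default backup file block size (1 MB)
COMPRESSION_RATIO = 2           # typical 2:1 compression

avg_object_kb = BLOCK_SIZE_KB / COMPRESSION_RATIO   # ~512 KB per object

full_backup_tb = 56
blocks = full_backup_tb * 1024 ** 2                 # count of 1 MB blocks in 56 TB

print(f"average object size: {avg_object_kb:.0f} KB")          # 512 KB
print(f"objects for one {full_backup_tb} TB full: {blocks:,}")  # ~58.7 million
```

Each object is far below the 5 TB per-object limit; the trade-off is simply a very large object count rather than a few huge objects.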
-
- Service Provider
- Posts: 14
- Liked: never
- Joined: Aug 11, 2017 9:09 pm
- Full Name: Matt Burnette
- Contact:
Re: 5TB size limit on S3
Thanks Vladimir!
I posted here since I got different answers from support and sales engineers.
The servers we want to offsite belong to our client, a law firm.
They would like to keep the data 'forever'.
We have a few concerns about the architecture of using S3 as the offsite storage location.
Are there any concerns about the chain ever becoming corrupt when using S3 with this much data in a 'forever' capacity?
Is there a way to determine how long it would take to get this data into S3 other than guessing?
Would the SOBR Offload Job be paused if the Backup job was run? (This would increase the time for the data to get offsite.)
Is there any best practice for how the jobs should be set up? (WAN/LAN/LOCAL/16TB+, compression, each server in their own job vs. combining all servers, don't enable compact fulls, etc.)
In addition, we also only want to 'offsite' certain drives on these servers and not the OS drives.
To us, it appears we will have to create a dedicated SOBR for just those drives/jobs, then add S3 storage to that SOBR.
Are there any other tricks where we might be able to select or exclude what goes to S3 without having to make a new SOBR?
As a note, these are only 6-7 servers with around 100 TB of data total potentially being copied to S3.
This is a lot of data for us, and we just want to get it offsite as fast and efficiently as possible without waiting months for it to finish, or finding we messed up and need to redo it somehow.
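On the "how long would it take" question, a rough lower bound falls out of simple arithmetic. A hedged Python sketch, where the 1 Gbit/s sustained throughput is purely an assumed placeholder to replace with your measured WAN speed:

```python
# Rough transfer-time floor for offloading ~100 TB to S3.
# link_mbps is a hypothetical assumption; substitute your measured
# sustained WAN throughput. Ignores protocol overhead, throttling,
# and offload job scheduling, so real-world times will be longer.

data_tb = 100                      # total data from the post
link_mbps = 1000                   # ASSUMED: 1 Gbit/s sustained

data_bits = data_tb * 1024 ** 4 * 8
seconds = data_bits / (link_mbps * 1_000_000)
days = seconds / 86_400

print(f"~{days:.1f} days at {link_mbps} Mbit/s sustained")
```

At an assumed sustained 1 Gbit/s this works out to roughly ten days as a best case; halve the link speed and the floor doubles.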
-
- Chief Product Officer
- Posts: 31807
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 5TB size limit on S3
matt.burnette wrote: ↑Apr 16, 2019 8:29 pm
we just want to get it offsite as fast and efficiently as possible without waiting months for it to finish
In that case, I highly recommend you consider tape. "100TB forever" is one of the few use cases that tape is perfect for, and it will be an order of magnitude cheaper and faster than S3.
-
- Service Provider
- Posts: 40
- Liked: 1 time
- Joined: May 13, 2013 2:32 am
- Location: Brisbane
- Contact:
[MERGED] Object Storage for Big VMs
Looking at object storage, but we have a few big VMs (backup files are 20TB+).
Looking at S3-compatible options with file size limits of 5TB or 10TB, how are people getting around these limits?
Or do they just not put their big VMs on the capacity tier?
-
- Product Manager
- Posts: 14840
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Object Storage for Big VMs
Hello,
they are getting around this by using Veeam
There is no issue with Veeam because the backup format for object storage is completely different from the local backup format. You will see tons of small backup files instead of one big backup file in object storage.
Best regards,
Hannes
-
- Veeam Software
- Posts: 2097
- Liked: 310 times
- Joined: Nov 17, 2015 2:38 am
- Full Name: Joe Marton
- Location: Chicago, IL
- Contact:
Re: 5TB size limit on S3
Actually, you will see tons of small objects, not tons of small backup files.
Joe
-
- Service Provider
- Posts: 40
- Liked: 1 time
- Joined: May 13, 2013 2:32 am
- Location: Brisbane
- Contact:
Re: 5TB size limit on S3
Cheers, that makes sense,
just wanted to confirm