Discussions related to using object storage as a backup target.

S3 Compatible Storage Capacity Tier Seeding

Post by shebang »

Dear community

I am looking to leverage private S3 object storage (it is certified as Veeam Ready), but the repository is in a remote location. Therefore, I need to seed the data, similar to Data Box etc. However, since this is an S3-compatible repository, there is no procedure or method from Veeam that illustrates how to seed the data via an intermediary device, e.g. NAS, SAN, etc.

Ideally I would like to back up to my performance tier as normal, then seed to a block device (e.g. NAS), ship this to the S3 target's location and copy the data across. Am I correct in thinking that seeding to a block device is not possible because Veeam writes data differently to block (performance) and object storage? I.e. I can't simply copy the VIBs etc. verbatim to the object storage repository.

If that's the case, do I need a seeding device that is itself S3-based, so that the performance tier data is copied to the capacity tier (the S3-based seed device)? I could then use S3 tools to copy the objects into the remote object storage repository and re-point the capacity tier to it.

Re: S3 Compatible Storage Capacity Tier Seeding

Post by Mildur »

Hi
shebang wrote:
Am I correct in thinking seeding to a block device is not possible because Veeam writes data differently between block (performance) and object storage?
In the performance tier, we store the restore points as backup files.

On object storage, we don't store the restore points as backup files. We offload unique blocks as objects. The first restore point for each job is a full offload, and after that it's forever incremental: only changed blocks are transferred to object storage.
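That also answers the "copy the VIBs verbatim" question: what ends up in the bucket is a large number of block and metadata objects, not backup files. A minimal sketch to see that for yourself after the first offload (assuming boto3; the endpoint, credentials and bucket name below are placeholders, not anything Veeam-specific):

```python
# Hypothetical sketch, not from Veeam: summarise what landed in the capacity tier
# bucket after the first offload. Endpoint, credentials and bucket are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.your-provider.example",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

count, total_bytes = 0, 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="veeam-capacity-tier"):
    for obj in page.get("Contents", []):
        count += 1
        total_bytes += obj["Size"]

# Expect many small block/metadata objects rather than a few large .vbk/.vib files.
print(f"{count} objects, {total_bytes / 1024**3:.1f} GiB total")
```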
Can you tell me why you need to seed the data? Is it because of the available bandwidth and the amount of data? And will the throughput be enough after the initial seed to offload all changed blocks?
shebang wrote:
If that's the case, do I need a seeding device that is also S3-based, so that the performance tier data is copied to the capacity tier (the S3-based seed device), and then use S3 tools to copy the objects into the remote object storage repository and then re-point the capacity tier to the remote object storage repository?
We support seeding only to Azure or Amazon.

You can try seeding with another S3-compatible provider, but first test it with a small amount of data (a few small VMs). I don't know if seeding will work with every object storage provider. Make sure that versioning and object lock are not enabled on the new bucket.
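A quick way to sanity-check that before offloading anything, sketched with boto3 (endpoint, credentials and bucket name are placeholders for your environment):

```python
# Hypothetical sketch: confirm the seed bucket has neither versioning nor Object Lock
# enabled. Endpoint, credentials and bucket name are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://seed-device.local",  # placeholder endpoint of the seed device
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "veeam-migration"  # placeholder bucket name

versioning = s3.get_bucket_versioning(Bucket=bucket)
print("Versioning:", versioning.get("Status", "Disabled"))  # should not be "Enabled"

try:
    lock = s3.get_object_lock_configuration(Bucket=bucket)
    print("Object Lock:", lock["ObjectLockConfiguration"].get("ObjectLockEnabled"))
except ClientError:
    print("Object Lock: not configured")  # the state you want for the seed bucket
```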
I would do these steps (I haven't tested them):
1) Add your temporary "migration bucket" to the SOBR as the capacity tier
2) Use Copy mode to offload all restore points
3) Transfer the device to the remote location
4) Copy the objects, keeping the same key structure, to the new bucket (see the sketch below)
5) Add the new bucket to Veeam
6) Change the SOBR configuration to use the new bucket as the capacity tier
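For step 4, a minimal sketch of the idea with boto3 (all endpoints, credentials and bucket names are placeholders; for real data volumes a purpose-built tool such as rclone is probably more practical, but the key point is the same: the object keys must be preserved exactly):

```python
# Hypothetical sketch: copy every object from the seeded "migration" bucket to the
# remote capacity-tier bucket, preserving the exact key structure so Veeam can find
# the offloaded blocks again. All endpoints, credentials and names are placeholders.
import boto3

src = boto3.client(
    "s3",
    endpoint_url="https://seed-device.local",       # placeholder: portable seed device
    aws_access_key_id="SRC_KEY",
    aws_secret_access_key="SRC_SECRET",
)
dst = boto3.client(
    "s3",
    endpoint_url="https://s3.remote-site.example",  # placeholder: remote S3-compatible target
    aws_access_key_id="DST_KEY",
    aws_secret_access_key="DST_SECRET",
)

SRC_BUCKET, DST_BUCKET = "veeam-migration", "veeam-capacity-tier"

for page in src.get_paginator("list_objects_v2").paginate(Bucket=SRC_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        data = src.get_object(Bucket=SRC_BUCKET, Key=key)["Body"].read()
        # put_object is fine for small offloaded block objects; very large objects
        # would need a multipart upload instead.
        dst.put_object(Bucket=DST_BUCKET, Key=key, Body=data)
        print("copied", key)
```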
Product Management Analyst @ Veeam Software
Re: S3 Compatible Storage Capacity Tier Seeding

Post by shebang »

We need to seed because of the lack of bandwidth for the initial full backup. We expect deduplication and incremental backups to reduce the size of subsequent transfers. As a side question, are incremental backups sent 'full fat', i.e. without source-side deduplication?

Back to seeding: the 'migration bucket' you refer to needs to be object storage, i.e. the staging device itself must be S3? That's going to be tricky, because our seeding devices are typically portable NAS devices.

Re: S3 Compatible Storage Capacity Tier Seeding

Post by Mildur »

shebang wrote:
We need to seed because of the lack of bandwidth during the initial full backup. We expect deduplication and incremental backups to reduce the subsequent backups. As a side question, are incremental backups sent in 'full fat', i.e. without source-side deduplication?
An incremental backup is already compressed and Veeam-deduplicated on the performance tier, so the entire incremental backup is offloaded as unique blocks to the capacity tier. And if you create another synthetic or active full, Veeam will not offload the entire full backup size as objects; only unique blocks within a backup chain are offloaded. Object storage is always forever forward incremental.

If you want to know more about how the data is stored in object storage, have a look at this page.
shebang wrote:
Back to seeding, the 'migration bucket' you refer to needs to be object storage. I.e. the staging device? That's going to be tricky because our seeding devices are typically NAS devices that are portable.
Yes, you can't use block storage. You must use S3-compatible object storage for the seed device.
Product Management Analyst @ Veeam Software