Discussions related to using object storage as a backup target.
dss
Lurker
Posts: 1
Liked: never
Joined: Sep 21, 2022 8:34 am
Full Name: MP

S3 Object Storage Question

Post by dss »

Hello,
I am an IT consultant working with Veeam B&R at multiple sites.

Now that bandwidth in our area is increasing, I'm setting up S3 object storage for multiple customers.

I tried using a SOBR with a Performance Tier (local) and a Capacity Tier (S3), with a retention of 30 days and Object Lock of 30 days as well. The setup copies backups to object storage as soon as they are created.

The Performance Tier is configured for synthetic fulls.
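
For context, this is roughly how I sanity-check that the bucket itself has versioning and Object Lock enabled before pointing the Capacity Tier at it (a minimal boto3 sketch; the bucket name is a placeholder):

# Minimal sketch: check that an S3 bucket is ready for immutable backups.
# Assumes boto3 and AWS credentials are configured; the bucket name is a placeholder.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "veeam-capacity-tier"  # placeholder

# Object Lock requires versioning; both must be enabled when the bucket is created.
versioning = s3.get_bucket_versioning(Bucket=bucket)
print("Versioning:", versioning.get("Status", "Disabled"))

try:
    lock = s3.get_object_lock_configuration(Bucket=bucket)
    print("Object Lock:", lock["ObjectLockConfiguration"]["ObjectLockEnabled"])
except ClientError as err:
    print("Object Lock not configured:", err)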

I understand that the object storage is forever incremental, yet when I browse the object storage I can see the synthetic full.
So does this mean that older data is already being deleted from S3 and that the synthetic full is performed via API calls in S3? If so, how is this possible if the repository is encrypted?

Also, how do I set up GFS so that after this 30-day retention everything goes to Glacier? And how do I make sure the old backups are deleted, say after 6 months?

Last question: how can I be sure, or test, that the backup is not corrupt? Is there any automatic test we can enable?
I mean, with forever incremental, one problem in the chain corrupts every later backup, right?

Thanks BR
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: S3 Object Storage Question

Post by HannesK »

Hello,
and welcome to the forums.

"synthetic fulls" in object storage are space-less (same as on REFS / XFS). The sticky FAQ has some explanations around the most common questions.

For moving GFS restore points to Glacier, please see the Archive Tier section of the user guide.

Backups are deleted according to your retention settings. If you configure 6 months in the GFS settings, then (hopefully :-)) every backup software will delete them after the configured time.

Note: using AWS Glacier with only 6 months of retention will probably cost you more than just using S3 / S3 Infrequent Access.
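
As a rough illustration of why (every number below is an assumed placeholder, not real AWS pricing): the per-GB saving of Glacier has to outweigh the overhead that archiving adds, such as the temporary archiving appliance, API requests for many objects, and retrieval fees when you actually restore.

# Back-of-the-envelope sketch: is Glacier worth it for 6 months of GFS retention?
# Every number here is an assumed placeholder; check current AWS pricing for your region.
size_gb = 1024            # data kept as GFS restore points
months = 6

s3_ia_per_gb_month = 0.0125     # assumed S3 Infrequent Access price
glacier_per_gb_month = 0.0036   # assumed Glacier price

storage_saving = size_gb * months * (s3_ia_per_gb_month - glacier_per_gb_month)

# Overhead that only exists because of archiving (all assumed):
appliance_runtime = 10.0 * months     # temporary archiving appliance
api_requests = 20.0                   # requests to move many small objects
one_test_restore = size_gb * 0.03     # retrieval fee for a single restore test

overhead = appliance_runtime + api_requests + one_test_restore

print(f"Storage saved over {months} months:  ${storage_saving:,.2f}")
print(f"Archiving overhead (assumed):       ${overhead:,.2f}")
print("Glacier cheaper overall:", storage_saving > overhead)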

Health checks are available in the job settings. For the cloud storage: today, you will have to trust the 12 or more nines of durability of the cloud provider, or download all the data / restore it somewhere. In the future, we will offer options to also run health checks on data in object storage (that will create costs with most cloud providers).
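
If you do want to verify today, the brute-force option is to download the objects and check them against hashes you recorded yourself; below is a minimal sketch, assuming boto3 and a locally kept SHA-256 manifest (bucket name and manifest format are placeholders). In practice, a scheduled test restore (SureBackup, for example) is usually the more meaningful check.

# Minimal sketch: verify downloaded S3 objects against a local SHA-256 manifest.
# Assumes boto3 is configured; bucket, prefix and manifest format are placeholders.
# Note: downloading everything incurs egress/request costs with most providers.
import hashlib
import json
import boto3

s3 = boto3.client("s3")
bucket = "veeam-capacity-tier"  # placeholder

# manifest.json: {"object/key": "<sha256 hex digest>", ...} recorded at upload time
with open("manifest.json") as f:
    manifest = json.load(f)

mismatches = []
for key, expected in manifest.items():
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    digest = hashlib.sha256()
    for chunk in iter(lambda: body.read(1024 * 1024), b""):
        digest.update(chunk)
    if digest.hexdigest() != expected:
        mismatches.append(key)

print(f"Checked {len(manifest)} objects, {len(mismatches)} mismatches")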

Best regards,
Hannes
