Discussions related to using object storage as a backup target.
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins

Integrity and corruption guard question

Post by backupquestions »

Imagine you have a local repo that a backup copy job writes to. You have configured GFS retention for 4 weeklies, 12 monthlies, and 7 yearlies.

Now, you have this repo extended with a cloud tier, and all the GFS points' data is in the cloud, with only the *2 simple retention points* kept locally.
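
Just to make the numbers concrete, here's a rough Python sketch of what that retention policy keeps around at steady state (dates are made up for illustration; this is not how Veeam computes it internally):

Code: Select all

from datetime import date, timedelta

# Rough sketch of 4 weekly / 12 monthly / 7 yearly GFS retention at
# steady state. Dates are illustrative; real GFS scheduling is more
# involved, this just shows roughly how many sealed fulls exist.
today = date(2019, 3, 13)
weeklies  = [today - timedelta(weeks=i)    for i in range(4)]
monthlies = [today - timedelta(days=30*i)  for i in range(12)]
yearlies  = [today - timedelta(days=365*i) for i in range(7)]
points = sorted(set(weeklies + monthlies + yearlies), reverse=True)
print(len(points), "distinct GFS fulls retained (at most 4 + 12 + 7 = 23)")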

Now, suppose someday there is some sort of corruption in the backup files on this repo, or even in the ones out in object storage.

My understanding is that this is all technically still forever-incremental even though the chains are sealed, because the GFS points are created with ReFS block cloning and were then offloaded to object storage. Everything still hinges on the original local chain of 1 full plus 1 incremental (back to the original simple retention of 2).

So if corruption happens and could potentially ruin all the long-term points, can this all be fixed by running storage-level corruption guard on the local simple retention points, with the repaired blocks then going out to object storage to fix things out there too?

I'm hoping it isn't some kind of nightmare scenario where there is no way to repair the GFS points now that they are out in the cloud.
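
To make the question concrete, this is roughly the repair behavior I'm hoping exists. A minimal Python sketch, assuming a simple per-block checksum model (all names here are hypothetical, not Veeam's actual implementation):

Code: Select all

import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical block-level corruption guard: verify every block of the
# local backup file against the checksum recorded at backup time, re-read
# corrupt blocks from the production source, and push the repaired blocks
# out to the object storage copy as well.
def health_check(local_blocks, stored_checksums, fetch_from_source, object_store):
    repaired = []
    for block_id, data in local_blocks.items():
        if sha256(data) != stored_checksums[block_id]:
            fresh = fetch_from_source(block_id)   # repair the local copy
            local_blocks[block_id] = fresh
            object_store[block_id] = fresh        # propagate the fix to the cloud tier
            repaired.append(block_id)
    return repaired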
jmmarton
Veeam Software
Posts: 2092
Liked: 309 times
Joined: Nov 17, 2015 2:38 am
Full Name: Joe Marton
Location: Chicago, IL

Re: Integrity and corruption guard question

Post by jmmarton » 1 person likes this post

Keep in mind the GFS points are standalone synthetic fulls, and the first one moved to Capacity Tier involves moving *all* of the data blocks. From that point forward, dedupe kicks in. But this means that what's stored in Capacity Tier is in no way dependent upon what's stored in the Performance Tier. Thus even if you lose the entire Performance Tier, the data stored in Capacity Tier is still available.
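
Conceptually the offload behaves like content-addressed storage. Here's a minimal sketch of the idea in Python (illustrative only, not actual product code):

Code: Select all

import hashlib

def offload_gfs_full(blocks, object_store):
    """Offload one sealed GFS full to the capacity tier.
    blocks: list of bytes making up the synthetic full.
    object_store: dict mapping block hash -> bytes (the cloud tier).
    Returns the restore point's block index and the upload count."""
    index, uploaded = [], 0
    for data in blocks:
        h = hashlib.sha256(data).hexdigest()
        index.append(h)
        if h not in object_store:   # only blocks not already offloaded go up
            object_store[h] = data
            uploaded += 1
    return index, uploaded

# The first full uploads every block; later fulls upload only new blocks.
# Either way, each restore point's index resolves entirely against blocks
# held in the object store itself, never against the Performance Tier.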

From a reliability standpoint within the cloud, that depends on the cloud provider. For example, with S3 the data is copied across a minimum of three Availability Zones within a region. Azure Blob has redundancy built in, with the level of redundancy dependent upon the policy you select, but all options offer a minimum of eleven 9's of durability.
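
For a sense of scale, eleven 9's of durability means a 1e-11 chance of losing any given object in a year. A quick back-of-the-envelope check:

Code: Select all

durability = 0.99999999999                 # eleven 9's, per object per year
annual_loss_probability = 1 - durability   # about 1e-11
objects = 10_000_000
print(objects * annual_loss_probability)   # ~0.0001, i.e. about one object
                                           # lost per 10 million every 10,000 years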

The point is there really shouldn't be a need to run any sort of health check on the objects that have been written to object storage, at least not with the public cloud providers.

Joe
Gostev
Chief Product Officer
Posts: 31456
Liked: 6647 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Integrity and corruption guard question

Post by Gostev »

Here's a similar discussion from a few months ago.
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins

Re: Integrity and corruption guard question

Post by backupquestions »

Thank you for the info.