- Posts: 186
- Liked: 21 times
- Joined: Mar 13, 2019 2:30 pm
- Full Name: Alabaster McJenkins
Now, suppose this repo is extended with a cloud tier, and all of the GFS points have been offloaded to object storage, with only the *2 simple retention points* kept local.
Then one day some sort of corruption appears in the backup files on this repo, or even in the ones out in object storage.
My understanding is that this is effectively incremental forever: even though the chains are sealed, the GFS points are created with ReFS block cloning and then offloaded to object storage, so everything still hinges on the original local chain of 1 full plus 1 incremental (back to the original simple retention of 2).
So if corruption happens and could potentially ruin all of the long-term points, can this be fixed by storage-level corruption guard running against the local simple retention points, with the repaired blocks then going out to object storage to fix things there too?
I'm hoping it isn't some kind of nightmare scenario where there is no way to repair the GFS points now that they are out in the cloud.
- Veeam Software
- Posts: 2081
- Liked: 309 times
- Joined: Nov 17, 2015 2:38 am
- Full Name: Joe Marton
- Location: Chicago, IL
From a reliability standpoint within the cloud, that depends on the cloud provider. For example, with S3 the data is copied across a minimum of three Availability Zones within a region. Azure Blob has redundancy built in, with the level of redundancy dependent on the policy you select, but all options offer a minimum of eleven 9's of durability.
The point is there really shouldn't be a need to run any sort of health check on the objects that have been written to object storage, at least not with the public cloud providers.
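To put the "eleven 9's" figure in perspective, here is a rough back-of-the-envelope sketch. The object count is a made-up assumption for illustration, and the calculation simply treats eleven 9's as a per-object annual durability probability, which is how the cloud providers typically quote it:

```python
# Illustrative durability math. Assumption: "eleven 9's" means an annual
# per-object durability of 0.99999999999, the figure commonly quoted for
# S3 and Azure Blob.
annual_durability = 0.99999999999
p_loss_per_object = 1 - annual_durability  # ~1e-11 chance of loss per object per year

# Hypothetical fleet size: ten million stored backup objects.
objects = 10_000_000

# Expected number of objects lost per year across the whole fleet.
expected_losses = objects * p_loss_per_object
print(f"expected losses/year: {expected_losses:.6f}")
```

Even with ten million objects, the expected loss is on the order of one ten-thousandth of an object per year, which is why a separate health check against the object storage tier adds little.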