-
- Service Provider
- Posts: 42
- Liked: 19 times
- Joined: Mar 28, 2020 3:50 pm
- Full Name: GregT
- Contact:
Re: persistent offload error to capacity tier (backblaze)
Veremin,
When we deployed the Backblaze bucket back in October of last year, the online directions (which have since been fixed) specifically said to select the "keep only the latest version" option, which is why several of the early adopters had it set that way. I also had several calls with a Veeam engineer and several Veeam employees to review the configuration, through a consulting engagement that Veeam provided us as part of becoming a Veeam partner, to make sure it was set up correctly.
Being that we have 8 months of data offloaded to the bucket, which does not exist anywhere else, can you please provide us with information on how to "start over" without losing 8 months of backups? Perhaps re-adding the bucket to a new SOBR would allow it to sync up properly? Or do we need to download all of the data from the bucket to the repo and then upload it to a new bucket? I understand that the technical configuration is no longer supported, but Veeam helped get us here, so we would greatly appreciate Veeam's help getting us back to working order.
Thanks!
case # 04515058
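For anyone auditing a similar setup: the retention behaviour described above hinges on the bucket's versioning and lifecycle configuration, and both can usually be read back over B2's S3-compatible API. Below is a minimal boto3 sketch; the endpoint URL, bucket name, and credentials are placeholders, and whether a given endpoint exposes the lifecycle call is an assumption - if it does not, check the rules in the B2 console instead.

```python
# Minimal sketch: audit a bucket's versioning and lifecycle state through an
# S3-compatible API. Endpoint URL, bucket name, and credentials are
# placeholders, not values from this thread.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",  # example B2 S3 endpoint
    aws_access_key_id="<keyID>",
    aws_secret_access_key="<applicationKey>",
)

bucket = "<veeam-capacity-tier-bucket>"

# Versioning status: "Enabled" is what immutability/Object Lock relies on.
versioning = s3.get_bucket_versioning(Bucket=bucket)
print("Versioning:", versioning.get("Status", "Not enabled"))

# Lifecycle rules: a "keep only the latest version" style rule would show up
# here (assuming the endpoint supports GetBucketLifecycleConfiguration).
try:
    lifecycle = s3.get_bucket_lifecycle_configuration(Bucket=bucket)
    for rule in lifecycle["Rules"]:
        print("Lifecycle rule:", rule)
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        print("No lifecycle rules configured")
    else:
        raise
```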
-
- Chief Product Officer
- Posts: 31805
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: persistent offload error to capacity tier (backblaze)
gtelnet wrote: ↑Jan 22, 2021 9:19 pm
When we deployed the Backblaze bucket back in October of last year, the online directions (which have since been fixed) specifically said to select the "keep only the latest version" option, which is why several of the early adopters had it set that way. I also had several calls with a Veeam engineer and several Veeam employees to review the configuration, through a consulting engagement that Veeam provided us as part of becoming a Veeam partner, to make sure it was set up correctly.
Just to be clear: did you consult with anyone at Veeam prior to enabling immutability?
-
- Chief Product Officer
- Posts: 31805
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: persistent offload error to capacity tier (backblaze)
gtelnet wrote: ↑Jan 22, 2021 9:19 pm
Being that we have 8 months of data offloaded to the bucket, which does not exist anywhere else, can you please provide us with information on how to "start over" without losing 8 months of backups? Perhaps re-adding the bucket to a new SOBR would allow it to sync up properly? Or do we need to download all of the data from the bucket to the repo and then upload it to a new bucket? I understand that the technical configuration is no longer supported, but Veeam helped get us here, so we would greatly appreciate Veeam's help getting us back to working order.
Download and offload again should definitely work imho, but let's have Vladimir confirm this with our QC.
-
- Service Provider
- Posts: 42
- Liked: 19 times
- Joined: Mar 28, 2020 3:50 pm
- Full Name: GregT
- Contact:
Re: persistent offload error to capacity tier (backblaze)
The bucket was created October 13th, 2020 with immutability enabled from the start, a few days after Veeam/Backblaze announced support for the flag (I believe October 9th), which is the reason we created the B2 bucket. The 8 months of data in the bucket is the data that was offloaded when we created the capacity tier and selected "all" (May through October), as well as the newer backups from November, December, and January.
EDIT: I won't include names here, but yes, we did have help from Veeam engineers when we did this, who are actually still involved trying to help, which is much appreciated!
-
- Service Provider
- Posts: 42
- Liked: 19 times
- Joined: Mar 28, 2020 3:50 pm
- Full Name: GregT
- Contact:
Re: persistent offload error to capacity tier (backblaze)
And thank you for staying involved, Gostev. Huge fan of your presentations and knowledge!!
-
- Chief Product Officer
- Posts: 31805
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: persistent offload error to capacity tier (backblaze)
OK. It sounds like whoever was advising you from Veeam made recommendations under the assumption that Backblaze S3 object storage is identical to Amazon S3. Honestly speaking, you would probably have gotten the same recommendation even from me if you had asked here on the forums, because I also did not know about all their peculiarities back then. As I've said earlier... teething issues.
Thanks for your kind words!
-
- Service Provider
- Posts: 42
- Liked: 19 times
- Joined: Mar 28, 2020 3:50 pm
- Full Name: GregT
- Contact:
Re: persistent offload error to capacity tier (backblaze)
I'm used to technology being frustrating, especially when you're one of the first to use a new feature. I harbor no hard feelings against anyone who gave me the recommendations, just those that leave me holding the bag.
As I review my notes from the original installation, I'm realizing it is quite possible that the directions we followed predated the immutability flag support from October 9th, so while the Veeam engineers and I all followed the directions, I think they may not yet have been updated to include the immutability support that had been added only 4 days earlier.
-
- Service Provider
- Posts: 42
- Liked: 19 times
- Joined: Mar 28, 2020 3:50 pm
- Full Name: GregT
- Contact:
Re: persistent offload error to capacity tier (backblaze)
No matter how many times I rescan the SOBR, I still get the error "Local index is not synchronized with object storage, please rescan the scale-out backup repository" for hundreds of the offload tasks, though some do work properly.
The step that created this issue is when Veeam support had me remove /ArchiveIndex from the Linux XFS extent. Another Veeam engineer said I might need to remove that folder from both the Linux extent and the S3 bucket, to force a full rebuild of the index. Is this something we should try? Thank you!
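For reference, before deleting anything from the bucket it may be worth checking whether an ArchiveIndex folder exists there at all, by listing the bucket by prefix. This is only a hedged sketch: the prefix below is a guess at the Veeam folder layout, and the endpoint and bucket name are placeholders; credentials come from the default boto3 credential chain.

```python
# Minimal sketch: look for objects whose key contains "ArchiveIndex" under a
# given prefix in the capacity-tier bucket. Adjust the prefix to the actual
# folder layout you see when browsing the bucket.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")

bucket = "<veeam-capacity-tier-bucket>"
prefix = "Veeam/Archive/"   # hypothetical parent prefix; confirm in your bucket

paginator = s3.get_paginator("list_objects_v2")
hits = []
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if "ArchiveIndex" in obj["Key"]:
            hits.append(obj["Key"])

print(f"Found {len(hits)} ArchiveIndex objects")
for key in hits[:20]:
    print(key)
```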
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: persistent offload error to capacity tier (backblaze)
Gostev wrote:
Download and offload again should definitely work imho, but let's have Vladimir confirm this with our QC.
Confirmed, the following looks like the safest option:
- Put Capacity extent into sealed mode
- Download all backup chains, using the Object Storage node. Because versioning is enabled, some of the backup chains and their metadata files might have become inconsistent (an error similar to "Bucket name does not have file: path/object") - mark those chains and then ask the support team for assistance to clear them
- Add new object storage repository, using different bucket
- Replace old Capacity extent with the new one
- Move downloaded backup chains to the new Capacity extent
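As a sanity check after the last step, it may help to compare what sits under the Veeam folder in the old and new buckets. A minimal boto3 sketch; the endpoint, bucket names, and prefix are placeholder assumptions, not values from this thread.

```python
# Minimal sketch: summarize object count and total size under a prefix, so
# the old and new capacity-tier buckets can be compared after the re-offload.
import boto3

def summarize(bucket: str, prefix: str = "") -> tuple:
    """Return (object_count, total_bytes) for a bucket prefix."""
    s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")
    count = total = 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            count += 1
            total += obj["Size"]
    return count, total

for b in ("<old-capacity-bucket>", "<new-capacity-bucket>"):
    n, size = summarize(b, prefix="Veeam/")   # hypothetical prefix
    print(f"{b}: {n} objects, {size / 2**30:.1f} GiB")
```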
Thanks!
-
- Service Provider
- Posts: 42
- Liked: 19 times
- Joined: Mar 28, 2020 3:50 pm
- Full Name: GregT
- Contact:
Re: persistent offload error to capacity tier (backblaze)
Thank you, veremin!
We have two SOBRs; one of them is behind Cloud Connect gateways, so there are no backup jobs/cloud objects for us to perform the download on. What would be the similar process for Cloud Connect backups at an MSP?
The second SOBR is not behind cloud gateways, so I tried the download procedure. Some worked and some gave the error below, which seems to correlate with the jobs where the ArchiveIndex is out of sync.
The step that caused the index to go out of sync is when Veeam support had me remove /ArchiveIndex from the Linux XFS extent. Another Veeam engineer said I might need to remove the ArchiveIndex folder from both the Linux extent and the S3 bucket, to force a full rebuild of the index. Is this something we should try? Any other suggestions to get the index back in sync? Thank you!

Failed to download backup file chain from object storage Error: Amazon REST error: 'S3 error: Key not found
Code: NoSuchKey', error code: 404
Other:
Unable to retrieve next block transmission command. Number of already processed blocks: [80].
Exception from server: Amazon REST error: 'S3 error: Key not found
Code: NoSuchKey', error code: 404
Other:
EDIT: I don't see an ArchiveIndex folder in the bucket, so not sure if renaming it is even an option.
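For anyone hitting the same 404: the log above does not show which key was requested, so before assuming the data is gone it may be worth checking that key directly in the bucket. A hedged boto3 sketch; the endpoint, bucket, and key are placeholders to be taken from the actual job log.

```python
# Minimal sketch: given a key reported in a "NoSuchKey" error, check whether
# the object still exists and what versions/delete markers remain for it.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")
bucket = "<veeam-capacity-tier-bucket>"
key = "<path/to/missing/object>"   # placeholder: copy from the job log

try:
    head = s3.head_object(Bucket=bucket, Key=key)
    print("Current version exists:", head.get("VersionId"), head["ContentLength"], "bytes")
except ClientError as err:
    print("head_object failed:", err.response["Error"]["Code"])

# List all versions and delete markers recorded for this key.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for v in versions.get("Versions", []):
    print("version:", v["VersionId"], "latest:", v["IsLatest"], v["Size"], "bytes")
for m in versions.get("DeleteMarkers", []):
    print("delete marker:", m["VersionId"], "latest:", m["IsLatest"])
```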
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: persistent offload error to capacity tier (backblaze)
gtelnet wrote:
What would be the similar process for cloud connect backups at an MSP?
Click on the tenant and select the download option.
As for the 404 error, I cannot comment on what actions led to the described behaviour, but the error itself suggests that some blocks are missing from the corresponding backup chain (this is what I referred to as inconsistent chains in my previous reply), so the backups cannot be retrieved.
Thanks!
-
- Service Provider
- Posts: 42
- Liked: 19 times
- Joined: Mar 28, 2020 3:50 pm
- Full Name: GregT
- Contact:
Re: persistent offload error to capacity tier (backblaze)
Any suggestions on how to get the ArchiveIndex properly synced again? Prior to support having us remove that folder, every full server restore of backups moved to Backblaze was 100% successful, even the ones that we were getting the DeleteMultipleObjects error for. If we can get the ArchiveIndex fixed, then we can move ahead with downloading all the data and starting a new bucket. Thanks again!
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: persistent offload error to capacity tier (backblaze)
The ArchiveIndex error is a side issue; the main one indicates that the backup chain is inconsistent and required blocks are missing - at least this is what the QA team concluded after investigating the case.
-
- Enthusiast
- Posts: 51
- Liked: 61 times
- Joined: Feb 11, 2019 6:17 pm
- Contact:
Re: persistent offload error to capacity tier (backblaze)
veremin wrote: ↑Jan 25, 2021 2:07 pm
Confirmed, the following looks like the safest option:
- Put Capacity extent into sealed mode
- Download all backup chains, using the Object Storage node. Because versioning is enabled, some of the backup chains and their metadata files might have become inconsistent (an error similar to "Bucket name does not have file: path/object") - mark those chains and then ask the support team for assistance to clear them
- Add new object storage repository, using different bucket
- Replace old Capacity extent with the new one
- Move downloaded backup chains to the new Capacity extent
What I have done to clear the error is clone the original backup job, then delete the original backup job that has the 'delete multiple' error.
This causes the data to be categorized under 'Disks Imported'. When the new backup jobs run, the delete multiple error is cleared. The old backup data is retained, although in what state one cannot say. Presumably some of the data is recoverable, but data relying on deleted incrementals may not be. This does, however, allow you to begin again with some possibility of data recovery until the data is aged out.
-
- Technology Partner
- Posts: 9
- Liked: 12 times
- Joined: Jan 05, 2021 10:11 pm
- Full Name: Nilay Patel
- Contact:
Re: persistent offload error to capacity tier (backblaze)
Nilay from Backblaze here again. That error is my old enemy that we successfully vanquished.
What we discovered when debugging the original issue described in this thread is that Veeam requests a specific object using the object's name and versionId. For whatever reason, the version being requested has been deleted. I can't speak to why it has been removed in your case, but our exploration pointed at a combination of things:
1) Backblaze B2 had a bug (which affected a small number of customers and was fixed as documented above) that could cause a file uploaded by Veeam to be stored in B2 twice. That would have created two versions of the object. This happened incredibly infrequently, but it did happen due to the bug.
2) Some of our customers had configured B2 lifecycle rules to remove old versions of a file.
So, if you happened to be a customer that was affected by the bug and had enabled lifecycle rules, two objects were created on an upload and Veeam had stored the versionId of one of them in its database. If that versionId was for the first uploaded object and lifecycle rules came along and removed the first uploaded object... Veeam wouldn't know. Veeam's database had a reference to an object version that no longer existed and would then throw this error on recoveries and restores. Backblaze did come up with a solution for resolving this for affected customers.
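To illustrate that failure mode: if an application stores a versionId and a lifecycle rule later removes that specific version, a versioned GET fails with a 404 even though another version of the same key may still exist. A minimal boto3 sketch against an S3-compatible endpoint; the bucket, key, and version ID are placeholders, not values from this case.

```python
# Minimal sketch: fetch an object by a stored versionId and show what happens
# when that exact version has been deleted (e.g. by a lifecycle rule).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")
bucket = "<veeam-capacity-tier-bucket>"
key = "<path/to/object>"
stored_version_id = "<versionId recorded in the application database>"

try:
    obj = s3.get_object(Bucket=bucket, Key=key, VersionId=stored_version_id)
    print("OK, fetched", obj["ContentLength"], "bytes")
except ClientError as err:
    code = err.response["Error"]["Code"]
    # A deleted version typically surfaces as NoSuchVersion or NoSuchKey (404),
    # similar to the "S3 error: Key not found, Code: NoSuchKey" in this thread.
    print("Versioned GET failed:", code)
    # The key itself may still resolve if a newer version is current:
    current = s3.list_object_versions(Bucket=bucket, Prefix=key)
    for v in current.get("Versions", []):
        print("surviving version:", v["VersionId"], "latest:", v["IsLatest"])
```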
I hope that helps, even if it's not directly related to your issue.
-- Nilay
----------
Nilay Patel
VP of Sales & Solution Engineering, Backblaze