Discussions related to using object storage as a backup target.
sfirmes
Veeam Software
Posts: 225
Liked: 117 times
Joined: Jul 24, 2018 8:38 pm
Full Name: Stephen Firmes
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by sfirmes » 1 person likes this post

@selva the error you are seeing is due to Backblaze returning an unexpected response to a particular S3 API call. @nilayp noted they have identified the issue and are currently working on it. No files are being removed by either Backblaze or VBR; it's a non-standard response that they are fixing. Sorry for any confusion.
Senior Solutions Architect, Product Management - Alliances @ Veeam Software
nilayp
Technology Partner
Posts: 9
Liked: 12 times
Joined: Jan 05, 2021 10:11 pm
Full Name: Nilay Patel
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by nilayp » 2 people like this post

@Selva says:
selva wrote: Jan 06, 2021 4:12 pm Now Backblaze is recommending (as per Nilay Patel's post above) to leave the lifecycle rule at the default of "keep all versions". I will surely try this, but that still does not explain the error we are facing.
To be super clear about the recommendation:

Lifecycle rules must be kept at "keep all versions" IF AND ONLY IF immutability is enabled.
Lifecycle rules must be kept at "keep only latest version" IF AND ONLY IF immutability is not enabled.
selva wrote: Jan 06, 2021 4:12 pm Why would an object not deleted by Veeam be removed by Backblaze? If it's Veeam that is removing files it should not, why is it doing that? These are the real issues, not whether Veeam will put back a file that was accidentally deleted by a user directly accessing the bucket.
If Lifecycle rules are set to "keep only last version", Backblaze B2 removes old versions automatically. In the case where immutability is enabled, that lifecycle configuration could cause B2 to remove a file before Veeam does. Today, this causes the HTTP error in the logs due to the bug I mentioned in my original post. While the bug will be fixed shortly and the log message will disappear, the real issue, that B2 is configured to remove object versions underneath Veeam, needs to be addressed to ensure restores and recoveries do not fail due to missing objects.
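For anyone who wants to double-check a bucket before (or after) pointing Veeam at it, here is a minimal sketch of such a check. It is illustrative only, not an official Veeam or Backblaze tool; it assumes boto3 against an S3-compatible endpoint that answers GetObjectLockConfiguration and GetBucketLifecycleConfiguration, and the endpoint URL, credentials, and bucket name are placeholders.

```python
# Illustrative sketch: flag lifecycle/immutability combinations that this thread
# identifies as unsupported. Endpoint, credentials, and bucket name are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",  # placeholder endpoint
    aws_access_key_id="<keyID>",
    aws_secret_access_key="<applicationKey>",
)
bucket = "my-veeam-bucket"  # placeholder

# Is Object Lock (the basis of Veeam immutability) enabled on the bucket?
try:
    lock = s3.get_object_lock_configuration(Bucket=bucket)
    immutability = lock.get("ObjectLockConfiguration", {}).get("ObjectLockEnabled") == "Enabled"
except ClientError:
    immutability = False

# Do any lifecycle rules expire old (noncurrent) versions?
try:
    rules = s3.get_bucket_lifecycle_configuration(Bucket=bucket).get("Rules", [])
    expires_old_versions = any("NoncurrentVersionExpiration" in r for r in rules)
except ClientError:
    expires_old_versions = False  # no lifecycle rules at all: "keep all versions"

if immutability and expires_old_versions:
    print("WARNING: immutability is enabled but lifecycle rules delete old versions.")
elif not immutability and not expires_old_versions:
    print("NOTE: immutability is disabled; a 'keep only latest version' rule is expected.")
else:
    print("Lifecycle/immutability combination matches the guidance above.")
```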
selva wrote: Jan 06, 2021 4:12 pm Further, I have already done a rescan several times, starting from when the error was first noticed weeks ago, and that has not helped. The rescan always succeeds with no errors or missing files reported.
Please see the posts from Andreas and Steve on the proper way to heal a Scale-out Backup Repository that is in this situation. Further testing done after my original post implies a simple rescan doesn't solve the issue completely.
jjordan
Lurker
Posts: 1
Liked: never
Joined: Jan 07, 2021 3:01 pm
Full Name: Jared Jordan
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by jjordan »

We are experiencing the same problem with our original bucket configuration of 'Keep all versions of the file (default)'. I did check the file path of one of the errors and the file did not exist in Backblaze. Something deleted it...

We need to figure this out. The reason we went to Backblaze was for the immutability.
RiteBytes
Service Provider
Posts: 4
Liked: 1 time
Joined: Jan 17, 2017 9:15 pm
Full Name: Brian Baker
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by RiteBytes »

Is it OK to apply "keep all versions" (the rule for buckets with immutability enabled) to an already in-use bucket? I have backups going with "keep only latest versions" on a new test backup. It may not have run long enough to run into issues with deleting items.

I ran into the same issue as the OP a few months back. I tried to work with Veeam support, but the process was taking too long and I gave up (I'm outsourced IT and can't spend billable hours working on issues like this). Hopefully these tweaks get this working better, as my clients will benefit from it.
nilayp
Technology Partner
Posts: 9
Liked: 12 times
Joined: Jan 05, 2021 10:11 pm
Full Name: Nilay Patel
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by nilayp »

Hi jjordan,
jjordan wrote: Jan 07, 2021 4:23 pm I did check the file path of one of the errors and the file did not exist in Backblaze. Something deleted it....
When you say one of the "errors" - I assume you mean "DeleteMultipleObjects request failed to delete object [] error: NoSuchVersion, message: 'Invalid version id specified'", correct?

If you have Lifecycle Rules set up to "Keep all versions of the file," files will only be removed when Veeam issues a delete command. It is possible that Veeam issued a delete that succeeded, but Veeam didn't have the opportunity to record the successful delete. In that case, Veeam will simply try to delete the file again.

This can happen for a variety of reasons, especially with an endpoint like DeleteMultipleObjects. As an example, say the Veeam client sends a list of 100 objects to delete and the Internet connection breaks before the delete completes. Backblaze B2 may have deleted some of the files, but not necessarily all of them, and the Veeam client has no way of knowing which were deleted and which were not. Therefore, the Veeam client will simply try to delete the objects again. As I mentioned previously, the Backblaze B2 functionality differs a bit from the S3 protocol, and that is why you see the NoSuchVersion error. Backblaze is committed to fixing this, and the log messages will disappear.
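To make the retry scenario above concrete, here is a minimal sketch of an idempotent multi-object delete. It is not Veeam's data mover code; it assumes boto3 against a versioned, S3-compatible bucket, and, per the discussion in this thread, treats a NoSuchVersion result on a retry as "already deleted" rather than as a failure.

```python
# Illustrative only: retrying DeleteObjects and treating NoSuchVersion as
# "already deleted". Endpoint and bucket are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")  # placeholder


def delete_versions(bucket, versions):
    """versions: list of {'Key': ..., 'VersionId': ...} dicts (up to 1000 per call)."""
    resp = s3.delete_objects(
        Bucket=bucket,
        Delete={"Objects": versions, "Quiet": False},
    )
    errors = resp.get("Errors", [])
    # Per this thread, AWS S3 reports a delete of an already-removed version as a
    # success, while pre-fix Backblaze B2 returned NoSuchVersion instead. A tolerant
    # client can simply ignore that code when retrying a partially failed batch.
    already_gone = [e for e in errors if e.get("Code") == "NoSuchVersion"]
    real_failures = [e for e in errors if e.get("Code") != "NoSuchVersion"]
    return resp.get("Deleted", []), already_gone, real_failures
```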

My understanding is that this error does not cause a backup job to fail. If you are experiencing errors completing backup jobs, OR if restores/recoveries are failing against backups where this error is found, please let us know.

I hope that helps. If not, please feel free to reply OR open a ticket with Backblaze and mention that Nilay sent you, so I can keep an eye on the ticket.

-- Nilay Patel
VP of Sales & Solution Engineering
nilayp
Technology Partner
Posts: 9
Liked: 12 times
Joined: Jan 05, 2021 10:11 pm
Full Name: Nilay Patel
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by nilayp »

Hi RiteBytes,
RiteBytes wrote: Jan 07, 2021 4:28 pm Is it OK to apply "keep all versions" (the rule for buckets with immutability enabled) to an already in-use bucket?
No, unfortunately you cannot. To enable immutability in Veeam, you have to enable "Object Lock" on the B2 bucket, and this can only be done at bucket creation.
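As a concrete illustration of that constraint (this sketch is not from Backblaze or Veeam), a new bucket with Object Lock might be created as below. It assumes boto3 and that the S3-compatible endpoint accepts CreateBucket with the ObjectLockEnabledForBucket flag; the bucket name and endpoint are placeholders.

```python
# Illustrative sketch: Object Lock must be requested at bucket creation time and
# cannot be switched on for an existing bucket. Names are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")  # placeholder

s3.create_bucket(
    Bucket="my-new-immutable-bucket",  # placeholder name
    ObjectLockEnabledForBucket=True,   # only possible here, not after creation
)
# The new bucket is then added to Veeam as a new object storage repository with
# immutability enabled; existing buckets without Object Lock cannot be converted.
```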

-- Nilay
RiteBytes
Service Provider
Posts: 4
Liked: 1 time
Joined: Jan 17, 2017 9:15 pm
Full Name: Brian Baker
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by RiteBytes »

Sorry I wasn't clear. I already have immutability turned on, but the lifecycle rule is set to "keep only latest versions". I want to know if it's OK to change the lifecycle rules on a bucket holding existing backups.
selva
Enthusiast
Posts: 73
Liked: 7 times
Joined: Apr 07, 2017 5:30 pm
Full Name: Selva Nair
Location: Canada
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by selva »

nilayp wrote: Jan 07, 2021 9:49 pm This can happen for a variety of reasons, especially with an endpoint like DeleteMultipleObjects. As an example, say the Veeam client sends a list of 100 objects to delete and the Internet connection breaks before the delete completes. Backblaze B2 may have deleted some of the files, but not necessarily all of them, and the Veeam client has no way of knowing which were deleted and which were not. Therefore, the Veeam client will simply try to delete the objects again. As I mentioned previously, the Backblaze B2 functionality differs a bit from the S3 protocol, and that is why you see the NoSuchVersion error. Backblaze is committed to fixing this, and the log messages will disappear.
Thanks for this clarification.
nilayp wrote: Jan 07, 2021 9:49 pm My understanding is that this error does not cause a backup job to fail. If you are experiencing errors completing backup jobs, OR if restores/recoveries are failing against backups where this error is found, please let us know.
This is not the case. Offloading to the capacity tier does fail on this error and, in my case, the offload is now 25 restore points behind. If that were not the case, this thread wouldn't exist. I have been working with Veeam support for over two weeks now and am still waiting for a solution. The backups to the performance extents continue without error, but that is no consolation.

So, once again, I'm not complaining about the logging of some seemingly innocuous error. My problem is that the SOBR offload is failing due to the "DeleteMultipleObjects" error. Based on communication with Veeam support and responses here, I got the impression that the problem is now understood and a fix is forthcoming. But now it seems that is not the case.
nilayp wrote: Jan 06, 2021 4:54 pm Please see the posts from Andreas and Steve on the proper way to heal a Scale-out Backup Repository that is in this situation. Further testing done after my original post implies a simple rescan doesn't solve the issue completely.
Could you please point me to those posts by Andreas and Steve where this process is described? I could find none.
lethallynx
Influencer
Posts: 24
Liked: 8 times
Joined: Aug 17, 2009 3:47 am
Full Name: Justin Kirkby
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by lethallynx »

nilayp wrote: Jan 05, 2021 10:58 pm
If a customer wants to tier backups to Backblaze B2 WITH immutability enabled, they must NOT use Lifecycle rules. Veeam will manage the deletion of object versions as necessary. Lifecycle rules should remain at the default, "Keep all versions of the file."
We created a brand new bucket which only Veeam accesses.
Immutability was enabled before any data was transferred.
It has always been set to Lifecycle Settings: Keep all versions of the file (default)

Yet we are still getting the error:
"DeleteMultipleObjects request failed to delete object [] error: NoSuchVersion, message: 'Invalid version id specified'"
Andreas Neufert
VP, Product Management
Posts: 6707
Liked: 1401 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by Andreas Neufert » 1 person likes this post

As shared above, this is because Backblaze does not give Veeam the expected response in that situation. Backblaze shared above that they will address this and patch it.
sfirmes
Veeam Software
Posts: 225
Liked: 117 times
Joined: Jul 24, 2018 8:38 pm
Full Name: Stephen Firmes
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by sfirmes » 1 person likes this post

@lethallynx Earlier in this thread @nilayp noted that Backblaze has identified the issue and is working on resolving it:
nilayp wrote: Jan 05, 2021 10:58 pm Nilay from Backblaze here.

As for the HTTP errors on subsequent DeleteObjectVersion operations that appear in the Veeam logs, this is caused by an inconsistency between AWS S3's DeleteObjectVersion and the corresponding API in Backblaze B2. An issue has been filed with Backblaze engineering [DEV-6848] to fix this inconsistency.
It is Backblaze's inconsistent implementation of the AWS DeleteObjects API that is causing you to see the "NoSuchVersion, message: 'Invalid version id specified'" error.

As you and others are experiencing, this error will cause the SOBR offload job to fail. Once Backblaze has implemented their fix for this error, the SOBR offload job(s) will be able to copy/move the data from the performance tier to the capacity tier.

Hope this helps explain what you are seeing.

Steve
Senior Solutions Architect, Product Management - Alliances @ Veeam Software
selva
Enthusiast
Posts: 73
Liked: 7 times
Joined: Apr 07, 2017 5:30 pm
Full Name: Selva Nair
Location: Canada
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by selva »

Backblaze's handling of this has been pathetic. It has seemed clear for a while now that the problem is their S3 API returning a non-compliant response to the deletion error. Instead of focusing on fixing it, they have been giving inconsistent and apparently irrelevant advice to change the lifecycle settings. After all this, they still haven't realized this is affecting our backups?
nilayp wrote: Jan 07, 2021 9:49 pm My understanding is that this error does not cause a backup job to fail. If you are experiencing errors completing backup jobs, OR if restores/recoveries are failing against backups where this error is found, please let us know.
(see @nilayp 's last post linked to above)

Before immutability was supported, it took me 45 days to get them to review their advice on lifecycle settings and finally admit that it should be set to "keep only the latest version"; otherwise storage was not being released when objects were cleared by Veeam (Backblaze support case 583535). Now they say that with immutability on, the lifecycle should be left at its bizarre default of "keep all versions". I'm afraid that a couple of months down the line they may come back and change that again, as their rationale for the advice makes little sense.

I think it's time to switch back to Amazon before the disk usage doubles, backups start failing again, and it takes another month-long campaign to get all parties to act.
Gostev
Chief Product Officer
Posts: 31459
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by Gostev » 1 person likes this post

To be fair, it's a brand new integration and you're among the early adopters. It's practically expected for early adopters to struggle with teething issues, so by becoming one you knowingly take that risk. I mean, can you think of one piece of software or hardware that has been flawless from the start and did not require a number of fixes and updates? Me neither :D

There's no doubt Amazon has a more mature S3 implementation. They created the API to start with and have had it in production for almost 15 years now... unlike Backblaze's own S3 interface, released just last year.

Look at the positive side too, though: we have a Backblaze VP answering in this topic. This shows they couldn't take the issue more seriously and are committed to ensuring that Veeam customers are successful with Backblaze. And for me personally, such attention from senior management is always a very important factor when choosing a vendor for a long-term partnership.
gtelnet
Service Provider
Posts: 40
Liked: 16 times
Joined: Mar 28, 2020 3:50 pm
Full Name: GregT
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by gtelnet »

Well said, @Gostev. Your involvement and your team's is greatly appreciated as well.
nilayp
Technology Partner
Posts: 9
Liked: 12 times
Joined: Jan 05, 2021 10:11 pm
Full Name: Nilay Patel
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by nilayp »

Hi Ritebytes,
RiteBytes wrote: Jan 07, 2021 11:36 pm I already have immutability turned on, but the lifecycle rule is set to "keep only latest versions". I want to know if it's OK to change the lifecycle rules on a bucket holding existing backups.
Yes, please change the lifecycle rules back to "Keep All Versions" and then follow the guidance from Steve (elsewhere in this thread) to repair the capacity tier offload.
nilayp
Technology Partner
Posts: 9
Liked: 12 times
Joined: Jan 05, 2021 10:11 pm
Full Name: Nilay Patel
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by nilayp » 2 people like this post

Hi Selva,
selva wrote: Jan 08, 2021 7:15 pm Backblaze's handling of this has been pathetic.
It has seemed clear for a while now that the problem is their S3 API returning a non-compliant response to the deletion error. Instead of focusing on fixing it, they have been giving inconsistent and apparently irrelevant advice to change the lifecycle settings. After all this, they still haven't realized this is affecting our backups?
I understand your frustration. I want to assure you that we escalated the incompatibility issue that was causing the "Invalid version id specified" error in the Veeam console and logs. It was filed as bug DEV-6848, and I can confirm that as of 01-12-2021 @ 9:17am, the bug has been fixed and deployed. Veeam customers should no longer be getting this error message. The DeleteObjects API endpoint in Backblaze B2 now works identically to the one in AWS S3.

Our understanding of this issue, from talking with multiple customers and Veeam, is that while the error was appearing, it wasn't resulting in failed offloads (with the exception of one customer). And it exposed that having Lifecycle rules configured incorrectly could remove objects that Veeam relies on to perform restores and recoveries.

I'm sorry if the nuance, that one issue (reports of "Invalid version id specified") led us to realize a more important issue (lifecycle rules set incorrectly could put restores/recoveries at risk), was not clearly described.

selva wrote: Jan 08, 2021 7:15 pm Before immutability was supported, it took me 45 days to get them to review their advice on lifecycle settings and finally admit that it should be set to "keep only the latest version"; otherwise storage was not being released when objects were cleared by Veeam (Backblaze support case 583535). Now they say that with immutability on, the lifecycle should be left at its bizarre default of "keep all versions". I'm afraid that a couple of months down the line they may come back and change that again, as their rationale for the advice makes little sense.
I assure you that the guidance won't change. When immutability is enabled, Veeam expects an object store with Versioning enabled, and Veeam is responsible for deleting all objects from the object store. When immutability is disabled, Veeam expects an object store with Versioning disabled. In the case of Backblaze B2, versioning cannot be disabled, so Veeam is responsible for "hiding" objects that are no longer necessary, and the Lifecycle rules in Backblaze B2 are responsible for deleting those hidden files.
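To illustrate the two modes described above, here is a minimal sketch (not Veeam's code) of the difference between a version-specific delete and a plain delete on a versioned bucket. It assumes boto3; the endpoint, bucket, and key are placeholders.

```python
# Illustrative sketch of the two deletion modes discussed above. Placeholders only.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")  # placeholder
bucket, key = "my-veeam-bucket", "Veeam/Archive/example.blk"                     # placeholders

# Immutability ON (versioned bucket): the client removes specific versions itself,
# so no lifecycle rule should be deleting versions behind its back.
resp = s3.list_object_versions(Bucket=bucket, Prefix=key)
for v in resp.get("Versions", []):
    s3.delete_object(Bucket=bucket, Key=v["Key"], VersionId=v["VersionId"])

# Immutability OFF: a plain delete on a versioned bucket only "hides" the object
# behind a delete marker; a lifecycle rule ("keep only the latest version") is what
# eventually removes the hidden versions and releases the storage.
s3.delete_object(Bucket=bucket, Key=key)  # creates a delete marker, frees nothing yet
```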

Once again, I'm not trying to make excuses for Backblaze. Some of the things I describe above took us some time to learn and to be able to communicate to customers. Software also has bugs that need to be fixed. When we come across these complicated situations, we jump on them as quickly as possible. We also pride ourselves on being an open and transparent company. In the future, if you don't believe we are living up to these pledges, please open a ticket and drop my name in it. I'll make sure we do.

And, thank you @Gostev for the kind words. I can assure you that Veeam is a very important integration for Backblaze and we are very serious about ensuring it works. Many, many customers are depending on the solution and we will treat each report seriously.

-- Nilay
VP of Sales and Solution Engineering
nilayp
Technology Partner
Posts: 9
Liked: 12 times
Joined: Jan 05, 2021 10:11 pm
Full Name: Nilay Patel
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by nilayp » 1 person likes this post

To the whole forum, and to anyone not following all the back and forth in the comments:

The issue causing the error "Invalid version id specified" in the Veeam console and logs has been fixed and deployed as of 01-12-2021 @ 9:17am. Veeam customers should no longer be getting this error message. The DeleteObjects API endpoint in Backblaze B2 now works identically to the one in AWS S3.

If you continue to see this error in your logs - please open a ticket with Backblaze support and we will investigate.

-- Nilay
VP of Sales and Solution Engineering
Andreas Neufert
VP, Product Management
Posts: 6707
Liked: 1401 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by Andreas Neufert »

Thank you for the update Nilay.
gtelnet
Service Provider
Posts: 40
Liked: 16 times
Joined: Mar 28, 2020 3:50 pm
Full Name: GregT
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by gtelnet »

I confirmed that I no longer see the "Invalid version id specified" error, but it does not appear to have resolved the error below. Does Backblaze or Veeam have an update on this error? We enabled "keep all versions" several days ago. Is this data lost because of the bug? If so, is there a way to determine which servers these objects belong to so that we can determine if they are still recoverable? Thank you.

Veeam case # 04515058 - Backblaze case # 627761

1/12/2021 1:09:47 PM :: Amazon REST error: 'S3 error: Bucket name does not have file: path/objectname.blk
Code: InvalidArgument', error code: 400
Other:
nilayp
Technology Partner
Posts: 9
Liked: 12 times
Joined: Jan 05, 2021 10:11 pm
Full Name: Nilay Patel
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by nilayp » 1 person likes this post

Hello gtelnet,

If you had Lifecycle Rules set to "keep most recent version" at some point in the past, changing them to "keep all versions" will prevent objects from being deleted underneath Veeam in the future. However, objects may have been deleted underneath Veeam in the past while "keep most recent version" was enabled. That error message certainly suggests this is what happened.

Steve from Veeam has given guidance for repairing this earlier in this thread (see: post396558.html#p396558). My recommendation: open a ticket with Veeam support. They will understand your environment and give you guidance on how to heal your performance tier from the capacity tier for your specific configuration.
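For readers who want to check whether an object named in such an error still exists (as jjordan did earlier in the thread), here is a minimal sketch. It is illustrative only and assumes boto3; the endpoint and bucket are placeholders, and the key is whatever path appears in the error message.

```python
# Illustrative sketch: does the object from the error message still have versions?
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")  # placeholder
bucket = "my-veeam-bucket"    # placeholder
key = "path/objectname.blk"   # path taken from the offload error message

resp = s3.list_object_versions(Bucket=bucket, Prefix=key)
versions = resp.get("Versions", [])
markers = resp.get("DeleteMarkers", [])

if versions:
    print(f"{key}: {len(versions)} version(s) still present")
else:
    print(f"{key}: no versions found; likely removed by the old lifecycle rule")
if markers:
    print(f"{key}: {len(markers)} delete marker(s) present")
```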
selva
Enthusiast
Posts: 73
Liked: 7 times
Joined: Apr 07, 2017 5:30 pm
Full Name: Selva Nair
Location: Canada
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by selva » 1 person likes this post

I'm a happy camper again: Thanks.
nilayp wrote: Jan 12, 2021 5:55 pm And, thank you @Gostev for the kind words. I can assure you that Veeam is a very important integration for Backblaze and we are very serious about ensuring it works. Many, many customers are depending on the solution and we will treat each report seriously.
Thanks for that reassurance, and above all thanks for putting up with my constant pestering. In the end, it's really good news that Backblaze has adjusted the API. And indeed, my stalled SOBR offloads started working at 1pm EST today. It will take a while to work through the backlog, but it is progressing very well.

One of the greatest attractions of Veeam, in addition to the robustness of the product itself, is the access to many senior engineers and executives right here in the forum. Seeing that level of attention from Backblaze is not lost on me, though for a moment I lost sight of it in frustration. I do appreciate the positives that @Gostev rightly extolled.
selva
Enthusiast
Posts: 73
Liked: 7 times
Joined: Apr 07, 2017 5:30 pm
Full Name: Selva Nair
Location: Canada
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by selva »

Although the error is fixed, there seems to be a problem. I've written to the support engineer, but I'm hoping someone here may spot something I might have done wrong.

My SOBR offload restarted with 32 pending restore points to be offloaded, but it "stalled" after transferring one incremental; there was no progress after that for hours. I stopped the tiering job and started another; this time it figured out the first point was transferred and started transferring the second one, and again stalled after completing it. The pattern repeats. The UI shows nothing useful; the log file shows it's waiting for some completion, with a line written to the logs every 30 minutes.

On the backup repository server, the data mover process is running, and tcpdump shows it is still connected to the S3 endpoint (Backblaze) and sends a few packets per minute. A sign of some expected response not being received? I don't think I have changed anything on this job since opening the case three weeks ago.
selva
Enthusiast
Posts: 73
Liked: 7 times
Joined: Apr 07, 2017 5:30 pm
Full Name: Selva Nair
Location: Canada
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by selva »

My SOBR offload restarted with 32 pending restore points to be offloaded, but it "stalled" after transferring one incremental; there was no progress after that for hours.
I shepherded it through this behaviour by manually restarting the tiering job whenever it appeared to stall. After doing that for about 4 restore points, it has now picked up steam and is progressing well, with several incrementals transferred without any long waits in between.

That makes me suspect some cleanup step against the capacity tier is very slow in some situations; maybe it is triggered only when there are old backlogs, as happened in my case. Though a legitimate slowdown doesn't explain why restarting the job always helped it complete quickly.

This post
clintbergman wrote: Nov 11, 2020 3:30 pm Long delays during SOBR Offload tasks
discusses a somewhat similar situation with long waits at ArchRepo.ArchiveCleanup, though not the same issue.

Anyway, looks good for now, and my support engineer is also pleased to hear that :)
lethallynx
Influencer
Posts: 24
Liked: 8 times
Joined: Aug 17, 2009 3:47 am
Full Name: Justin Kirkby
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by lethallynx » 1 person likes this post

gtelnet wrote: Jan 12, 2021 7:57 pm I confirmed that I no longer see the "Invalid version id specified" error, but it does not appear to have resolved the error below. Does Backblaze or Veeam have an update on this error? We enabled "keep all versions" several days ago. Is this data lost because of the bug? If so, is there a way to determine which servers these objects belong to so that we can determine if they are still recoverable? Thank you.

Veeam case # 04515058 - Backblaze case # 627761

1/12/2021 1:09:47 PM :: Amazon REST error: 'S3 error: Bucket name does not have file: path/objectname.blk
Code: InvalidArgument', error code: 400
Other:
Yep, looks like I am getting exactly the same error on my end.
Gostev
Chief Product Officer
Posts: 31459
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by Gostev » 1 person likes this post

I understand these are expected aftershocks from before you had "Keep all versions" enabled. You will see this error whenever Veeam attempts to delete a block according to the retention policy but the block was already removed earlier by Backblaze's lifecycle management policy on the bucket.
gtelnet
Service Provider
Posts: 40
Liked: 16 times
Joined: Mar 28, 2020 3:50 pm
Full Name: GregT
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by gtelnet »

The Veeam support engineer had me delete the /mnt/repo/backup/ArchiveIndex folder from each of my extents and then run a rescan of each SOBR to rebuild the ArchiveIndex. Details on the archive index can be found here: https://helpcenter.veeam.com/docs/backu ... ml?ver=100.

After the rescan is done, offloads will fail with warnings for 24 hours, per the Eventual Consistency model of Amazon S3, as discussed in the link above. I will post an update tomorrow.

PS: I made sure to stop all active offloads prior to performing the above.
tgx
Enthusiast
Posts: 31
Liked: 7 times
Joined: Feb 11, 2019 6:17 pm
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by tgx »

As I understand it, B2 deletes the files when the B2 Lifecycle settings are errantly left enabled while Veeam immutability is in use (self-inflicted due to lack of knowledge; perhaps there should be a warning near the Veeam immutability setting). Veeam thinks those files should still be there per its retention rules, so when Veeam goes to purge them according to those rules, the files are missing and it pops a 'delete multiple' error. The question now is: once you revert the B2 lifecycle settings back to 'Keep All Versions', how can you tell Veeam to fix the issue of the missing files so it no longer pops the 'delete multiple' error? That is the situation I have at the moment. Some sort of re-sync button in the Veeam GUI, or a right-click menu option, is needed.
gtelnet
Service Provider
Posts: 40
Liked: 16 times
Joined: Mar 28, 2020 3:50 pm
Full Name: GregT
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by gtelnet »

Deleting the /ArchiveIndex folder has not helped and has caused a new error, which we are trying to resolve by running the SOBR rescan over and over. A few offload sessions still give the same "cannot find file" error when trying to delete an object, but the majority show:
1/19/2021 11:09:02 AM :: Local index is not synchronized with object storage, please rescan the scale-out backup repository
I had a WebEx session with a Veeam engineer today and he said he will escalate it to dev. Will keep you posted.
veremin
Product Manager
Posts: 20270
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by veremin »

I've asked the QA team to follow the support investigation and will let you know once I have more information. Thanks!
veremin
Product Manager
Posts: 20270
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: persistent offload error to capacity tier (backblaze)

Post by veremin »

The support investigation has confirmed the findings above. The main reasons for the experienced issues were:

• Versioning was not enabled when the object storage repository was registered and was enabled some time after that
• Lifecycle rules were set to "keep only latest version" while the immutability feature was enabled

Since neither of these configurations is supported, you will unfortunately have to start over.

Thanks!