-
- Novice
- Posts: 8
- Liked: never
- Joined: Apr 29, 2024 5:55 pm
- Full Name: Fer Garcia
- Contact:
Failed to delete backup Error:S3 delete multiple objects request failed to delete object
I have created a Backup Repository on S3-compatible object storage (Huawei OBS Object Storage Service), and I can run backups of a whole Linux CentOS machine; the backup jobs end correctly. However, when I try to delete these jobs from the object storage using the Delete from Disk option, I get this error:
[Linux Backup - x.x.x.x] Failed to delete backup Error:S3 delete multiple objects request failed to delete object [Veeam/Backup/veeam/Clients/{68ecb7bf-71eb-4dc0-999e-d23c11da9331}/8a4413a5-ef6d-47e7-97c1-405d3853dbb6/CloudStg/Data/{eae4915d-ec16-4ce5-81d7-3755b34bb803}/{ff7ebeec-a9ca-4cf2-aa26-e556cfb8410c}/6861_adfb7dc6063ef3215e4999c92743c163_00000000000000000000000000000000] and [999] others, error: TimeOut, message: 'Time Out' Agent failed to process method {Cloud.DeleteBackup}.
I have increased "Limit concurrent tasks to:" to 128, but it still does not delete all the objects from these backups.
Can anyone guide me on why this error is happening?
Thanks a lot guys.
-
- Product Manager
- Posts: 9519
- Liked: 2521 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
Hello,
That sounds like a technical issue. We cannot investigate such issues through a forum topic.
Please provide a support case ID for this issue, as requested when you click New Topic. Without a case number, the topic will eventually be deleted by moderators.
My latest information is that Huawei S3 is not compatible with our products, because they use an incomplete implementation of the S3 API:
object-storage-as-a-backup-target-f52/u ... 56956.html
Regarding "I have created a Backup Repository on S3-compatible object storage (Huawei OBS Object Storage Service)": where have you found the statement that it is compatible with our products?
Incompatible:
Huawei S3 (incomplete S3 API implementation)
Best regards,
Fabian
PS: support can only help if you upload logs https://www.veeam.com/kb1832
Product Management Analyst @ Veeam Software
-
- Novice
- Posts: 8
- Liked: never
- Joined: Apr 29, 2024 5:55 pm
- Full Name: Fer Garcia
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
Does anyone know how to modify the timeout interval Veeam uses for requests to the OBS interface, and the number of objects to be deleted at a time?
-
- VP, Product Management
- Posts: 6943
- Liked: 1468 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
I am not sure it will fully address the issue if you change the settings below. We ran into some support issues with this storage because it did not fully implement the S3 standard. Some of the S3 operation replies are malformed (maybe they fixed it in the latest versions, so I suggest updating to their latest software version).
If you really want to modify the Veeam settings, I suggest starting with the timeouts to see how they work.
Reducing the number of objects per multi-delete operation has a negative side effect, as the storage would have to delete the same amount of data in far more S3 operations. (It does not change the fact that you need to delete the same amount of data, which may be the real issue here.)
The offload uses Repository Task Slots. One task slot is usually used to offload a single VM. If you have an unlimited Task Slot setting within the Repository definition, change it, as it has many negative side effects outside of S3 operations, too. In the object storage definition, you can also define the maximum number of task slots used to offload data.
The math behind it is:
Number of usable task slots x 64 = number of parallel S3 operations for put/get operations.
Number of usable task slots x 10 = number of parallel S3 multi-delete operations, where each multi-delete operation deletes up to 1000 objects.
Again, reducing the number of objects per multi-delete operation results in a higher total number of S3 operations. It is better to send fewer multi-delete operations that each delete a larger amount of data, as Veeam waits until those are processed before the next operation is sent. In total, you delete the same amount of data in fewer S3 operations (less load on the S3 backend).
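To make the arithmetic concrete, here is a small Python sketch (an illustration only; the x64 and x10 multipliers and the 1000-object limit come from this post, and the 128 task slots are simply the figure mentioned in the original question):

# Rough arithmetic for the parallelism described above.
task_slots = 128                                  # e.g. "Limit concurrent tasks to: 128"
parallel_put_get = task_slots * 64                # parallel S3 put/get operations
parallel_multi_delete = task_slots * 10           # parallel S3 multi-delete operations
objects_per_wave = parallel_multi_delete * 1000   # up to 1000 objects per multi-delete
print(parallel_put_get)       # 8192
print(parallel_multi_delete)  # 1280
print(objects_per_wave)       # 1280000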
These are the settings to set on the Veeam Backup Server, the repository servers (including servers that act as SOBR extents), and the Object Storage Gateway server (if in use); a small scripted sketch follows after the settings below.
Windows:
Reg Key
HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication
S3MultiObjectDeleteLimit (DWORD)
1000
(1000 is the maximum defined by the S3 standard; decreasing it will cause more S3 operations, as the same number of objects needs to be deleted across more requests. If you set the value to 1, then 1000 times as many S3 operations are sent to the object storage to achieve the same result.)
Linux
/etc/VeeamAgentConfig
S3MultiObjectDeleteLimit = 1000
(1000 is the maximum defined by the S3 standard; decreasing it will cause more S3 operations, as the same number of objects needs to be deleted across more requests. If you set the value to 1, then 1000 times as many S3 operations are sent to the object storage to achieve the same result.)
Windows:
Reg Key
HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication
S3RequestTimeoutSec
Type: REG_DWORD
Default value: 120
(2 minutes)
Linux
/etc/VeeamAgentConfig
S3RequestTimeoutSec = 120
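If you prefer to script the Windows side, a minimal Python sketch follows (an illustration, not an official Veeam tool; the key path and value names are the ones listed above, the data shown is just the defaults from this post, and it must be run elevated on the relevant server):

# Create/set the two DWORD values described above on a Windows server.
import winreg

KEY_PATH = r"SOFTWARE\Veeam\Veeam Backup and Replication"
values = {
    "S3MultiObjectDeleteLimit": 1000,  # objects per multi-delete request
    "S3RequestTimeoutSec": 120,        # request timeout in seconds
}

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    for name, data in values.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, data)
        print("Set", name, "=", data)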
-
- Service Provider
- Posts: 31
- Liked: 4 times
- Joined: Jun 27, 2022 8:12 am
- Full Name: Abdull
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
Hello Andreas,
Should we consider any version dependencies here?
Best
Abdi
-
- VP, Product Management
- Posts: 6943
- Liked: 1468 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
Yes, please use the latest Veeam version, as there are always enhancements in the object storage backup target area.
-
- Novice
- Posts: 8
- Liked: never
- Joined: Apr 29, 2024 5:55 pm
- Full Name: Fer Garcia
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
Hello Andreas:
I have checked the registry, but these keys are not present. Should I add them manually?
-
- VP, Product Management
- Posts: 6943
- Liked: 1468 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
-
- Influencer
- Posts: 22
- Liked: never
- Joined: Apr 09, 2023 7:50 pm
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
We have had the exact same issue with our provided S3 storage and Veeam. Veeam has been working with us for months on this. The problem isn't as severe as before after making some of the registry changes mentioned in this thread, but it still exists.
The error shows up in the offloads and also in an object store that is part of an imported capacity tier, which I am deleting. For auditing purposes, I was deleting the jobs and then tracking which jobs and restore points were deleted. The same error showed up in that process as well, basically, I believe, because the command sent to S3 is the same as the command used for the offloads. I'm at the point now where all jobs except one show 0 restore points for every server. The interesting thing is that none of the jobs (I'll call it metadata: the job name and the list of machines in the job) would delete when I tried to delete them via the B&R console. The one job with 1 machine and 27 restore points also would not delete. We ended up doing a call with Veeam and our provider. We removed the repository from Veeam, which only removes it from the console and leaves the data in the store, so now I'm working with the vendor to remove the rest of the data using the AWS CLI, since it could not be done through Veeam.
We still get the error in some of our offload sessions, but not as frequently. In the good old days, to me this would be analogous to having a hard drive with some corrupted sectors, but cloud is a different animal.
Wish I could tell you there was a solution but unfortunately I cannot.
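For reference, batch deletion outside of Veeam can also be scripted instead of using the AWS CLI. Here is a rough Python/boto3 sketch (bucket, prefix, and endpoint are placeholders; only run something like this against data that has already been removed from the Veeam configuration, ideally together with the storage vendor as described above):

# Delete leftover objects in chunks of at most 1000 keys per request
# (the S3 multi-delete limit discussed in this thread).
import boto3

s3 = boto3.client("s3", endpoint_url="https://obs.example.com")  # placeholder endpoint
bucket = "my-veeam-bucket"      # placeholder
prefix = "Veeam/Backup/"        # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    for i in range(0, len(keys), 1000):  # delete_objects accepts up to 1000 keys
        resp = s3.delete_objects(Bucket=bucket,
                                 Delete={"Objects": keys[i:i + 1000], "Quiet": True})
        for err in resp.get("Errors", []):
            print("failed:", err["Key"], err.get("Code"), err.get("Message"))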
-
- Novice
- Posts: 8
- Liked: never
- Joined: Apr 29, 2024 5:55 pm
- Full Name: Fer Garcia
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
Dear LEWISF, in my case I had good luck making the changes in the registry; my problem was solved:
If the S3MultiObjectDeleteLimit (DWORD) key is set to 1000, this won't work; you should set it to 100,
and S3RequestTimeoutSec should be set to 120.
Once you have made these changes, reboot the OS and try again...
Hope this now works for you.
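For the Linux agent side, the equivalent change to /etc/VeeamAgentConfig (path and setting names as given by Andreas above) could be scripted as in the rough sketch below; the 100/120 values are simply the ones reported to work in this post, so back the file up first and reboot afterwards:

# Set the two values in /etc/VeeamAgentConfig, replacing them if present.
# Run as root; /etc/VeeamAgentConfig is not writable otherwise.
from pathlib import Path

CONFIG = Path("/etc/VeeamAgentConfig")
wanted = {"S3MultiObjectDeleteLimit": "100", "S3RequestTimeoutSec": "120"}

lines = CONFIG.read_text().splitlines() if CONFIG.exists() else []
out, seen = [], set()
for line in lines:
    key = line.split("=", 1)[0].strip()
    if key in wanted:
        out.append(key + " = " + wanted[key])   # replace existing setting
        seen.add(key)
    else:
        out.append(line)
for key, value in wanted.items():
    if key not in seen:
        out.append(key + " = " + value)         # append if missing
CONFIG.write_text("\n".join(out) + "\n")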
-
- Veteran
- Posts: 598
- Liked: 87 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Failed to delete backup Error:S3 delete multiple objects request failed to delete object
We are also having issues with everything related to deletes and cleanups in the capacity tier. Reducing the multi-object delete key resulted in much longer running jobs (offload as well as backup jobs that remove restore points). Reducing concurrent tasks below 10 resulted in stuck jobs, and support told me it is better not to set any limit there, given the number of tasks that we have. This is really a major pain (just received the 4th or 5th hotfix for .56).