
Let me start from the beginning!
I have a 2.7 PB Ceph cluster (Pacific release) which exposes object storage through the integrated RadosGW, so I can use it as offload storage for Veeam backup jobs.
As the radosgw frontend I have tried both civetweb and beast (with the same results).
I'm using the immutability feature of the API (S3 Object Lock). At the very beginning I even recompiled Ceph to fix the timestamp format issue myself, but my merge request is not the one the community integrated into the official Ceph release. Same issue, similar solution, just not as elegant as theirs, maybe.

Almost everything works fine, even though Ceph's S3 implementation is not certified by Veeam, apart from one issue that Veeam support could not solve and that I'm now trying to solve myself.
Let me explain in more detail:
- offload to S3 works great, so I can put data into buckets with the right expiration date
- listing the objects in the buckets works great, so I can recover data from them
The only problem is the multi-object delete request that Veeam sends at the end of the job. If I run the job manually it can delete (or rather, mark for deletion) the oldest files, but when the scheduler runs the same job it cannot delete them and fails with "unknown error". During my investigation I found that the "delete" operation is a single bulk API request containing a list of up to 1000 object keys (the Veeam default) to be removed once their immutability has expired.
I tried putting fewer objects in the bulk request (playing with Veeam's registry keys), but the error is always the same: unknown error [and the same timeout].
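For reference, this is roughly the kind of bulk delete Veeam issues. A minimal boto3 sketch (endpoint, credentials, bucket and prefix below are placeholders, not my real values) that can be used to reproduce the call outside of Veeam:

import boto3

# Minimal reproduction of the S3 multi-object delete, independent of Veeam.
# All names below (endpoint, keys, bucket, prefix) are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:8080",  # the beast endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "veeam-offload"           # placeholder bucket name
prefix = "oldest-restore-point/"   # placeholder prefix of the expired objects

# Collect up to 1000 keys, the same batch size Veeam uses by default.
keys = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        keys.append({"Key": obj["Key"]})
        if len(keys) == 1000:
            break
    if len(keys) == 1000:
        break

# One multi-object delete call; on a versioned/object-locked bucket this only
# adds delete markers, much like Veeam's "mark for deletion".
resp = s3.delete_objects(Bucket=bucket, Delete={"Objects": keys, "Quiet": True})
print("errors:", resp.get("Errors", []))

If a call like this also fails around the same timeout, the problem is on the radosgw side and not in how Veeam builds the request.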
I know this is a Ceph issue (or rather, a RadosGW issue) and I'm pretty sure someone has hit the same problem before and maybe solved it a long time ago... well, that's why I'm here asking for your help.

I can produce plenty of logs and metrics, but I think the issue is related to some beast setting (or civetweb, but I'm using beast at the moment)...
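These are the radosgw/beast settings I would look at first. The instance name and values are only examples (not recommendations), and request_timeout_ms may not exist in older Pacific point releases, so check your version before relying on it:

# Example only: adjust the instance name and values to your cluster.
[client.rgw.myhost]
    # request_timeout_ms caps how long beast waits on a single request;
    # a slow bulk delete could be cut off by it and surface as a generic error.
    rgw_frontends = beast port=8080 request_timeout_ms=120000
    rgw_thread_pool_size = 512            # default; raise if RGW is saturated
    rgw_delete_multi_obj_max_num = 1000   # server-side cap on keys per bulk delete
    # temporary, to get something more useful than "unknown error" in the logs:
    debug_rgw = 20
    debug_ms = 1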
Thanks to everyone, and sorry for my bad English!
Fabio
