Discussions related to using object storage as a backup target.
pat_ren
Service Provider
Posts: 140
Liked: 35 times
Joined: Jan 02, 2024 9:13 am
Full Name: Pat
Contact:

Advice on immutable s3 storage with a 'no delete' s3 bucket policy

Post by pat_ren » 1 person likes this post

I've been doing some research on this and in theory it seems possible, but I haven't found any info from Veeam that backs this up. Hoping I can get some advice.

I was recently involved in helping to restore a client who was breached. The attackers were able to access the Veeam server and extract the goldmine of info it contains, such as Wasabi access keys. The backups were immutable, but the attackers were still able to attempt a 'delete', placing delete markers on every object (6 million+ objects), so this took some time to undo before I could assist with the recovery. I have built scripts to help with this, capable of running many parallel threads to remove the markers, but the initial listing of object versions is the slowest part of the process. Daily S3 inventory reports could possibly help with this, but they are limited to once a day/week and could be stale when needed the most.
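For anyone who ends up doing the same cleanup, here is a rough sketch of the idea (boto3, with a hypothetical endpoint and bucket name; the batched delete calls run in parallel, but listing the versions is still sequential, which is exactly the slow part I mentioned):

```python
# Rough sketch: strip delete markers from a versioned bucket after an attack.
# Hypothetical endpoint and bucket name; adjust for your own environment.
import boto3
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "https://s3.eu-central-1.wasabisys.com"  # assumption: your provider/region
BUCKET = "veeam-backups"                            # hypothetical bucket name

s3 = boto3.client("s3", endpoint_url=ENDPOINT)

def collect_delete_markers(bucket):
    """Paginate through list_object_versions and yield delete markers only (sequential)."""
    paginator = s3.get_paginator("list_object_versions")
    for page in paginator.paginate(Bucket=bucket):
        for marker in page.get("DeleteMarkers", []):
            yield {"Key": marker["Key"], "VersionId": marker["VersionId"]}

def remove_batch(bucket, batch):
    """Deleting a marker's specific VersionId removes the marker, restoring the object."""
    s3.delete_objects(Bucket=bucket, Delete={"Objects": batch, "Quiet": True})

def main():
    batch, futures = [], []
    with ThreadPoolExecutor(max_workers=16) as pool:
        for marker in collect_delete_markers(BUCKET):
            batch.append(marker)
            if len(batch) == 1000:  # delete_objects accepts at most 1000 keys per call
                futures.append(pool.submit(remove_batch, BUCKET, batch))
                batch = []
        if batch:
            futures.append(pool.submit(remove_batch, BUCKET, batch))
        for f in futures:
            f.result()

if __name__ == "__main__":
    main()
```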

I've been considering whether more can be done to secure immutable storage and lock things down further. Immutable storage is mandatory now, but it isn't perfect. If the policy attached to the access keys Veeam uses for a bucket allows deletes and an attacker can get those keys, the data can be tampered with. No data will be lost, but it will slow down recovery.

Veeam's own documentation states that s3:DeleteObject and s3:DeleteObjectVersion are required. This makes sense, since Veeam needs to be able to manage objects and retention.
https://helpcenter.veeam.com/docs/backu ... ml?ver=120

I have been looking into whether there is a way to configure these policies without granting any s3:Delete* permissions, and instead using another method to expire old objects.
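To make that concrete, what I had in mind is something like a bucket lifecycle rule doing the expiry on the storage side, so the access keys never need s3:Delete* at all. A rough boto3 sketch of the idea (hypothetical bucket name and retention values, and I realise this would mean managing retention outside of Veeam):

```python
# Thought experiment only: expire aged data via a bucket lifecycle rule instead of
# granting s3:DeleteObject* to the access keys Veeam uses. Hypothetical bucket name
# and retention period; the storage platform, not the key, performs the deletes.
import boto3

s3 = boto3.client("s3")  # add endpoint_url=... for Wasabi or other S3-compatible storage

s3.put_bucket_lifecycle_configuration(
    Bucket="veeam-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-noncurrent-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # Noncurrent versions older than the window are expired by the platform.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 60},
                # Clean up delete markers that have no remaining versions behind them.
                "Expiration": {"ExpiredObjectDeleteMarker": True},
            }
        ]
    },
)
```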

Obviously the best solution to this problem is making sure no bad actors can get to the Veeam server in the first place (the new v13 appliance is looking interesting for this). However, assuming a capable, advanced threat actor and a perfect-storm zero-day scenario (as I was recently involved with), any extra security I can implement would be of value. Cyber attacks are getting more and more sophisticated, attackers are using more advanced tools (including AI tools), and nothing can be left to chance anymore. They will find a hole if it exists.

Please let me know if anyone has tried anything like this with their access policies, and if so, whether it was successful. Thanks.
Mildur
Product Manager
Posts: 10910
Liked: 2984 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Advice on immutable s3 storage with a 'no delete' s3 bucket policy

Post by Mildur » 1 person likes this post

Hi Pat,

As you mentioned, the s3:DeleteObject and s3:DeleteObjectVersion permissions are mandatory; otherwise, your backup server will not be able to delete outdated objects. If the backup server can't clean up old objects, your bucket could become a mess very quickly.

And deleting backup objects outside of the Veeam application is not supported. Even if it were, it would be impossible without accessing the metadata on the backup server to identify which objects need to be deleted.

Best,
Fabian
Product Management Analyst @ Veeam Software
pat_ren
Service Provider
Posts: 140
Liked: 35 times
Joined: Jan 02, 2024 9:13 am
Full Name: Pat
Contact:

Re: Advice on immutable s3 storage with a 'no delete' s3 bucket policy

Post by pat_ren » 1 person likes this post

Thanks Fabian. I thought that would be the case, but it never hurts to ask; I'm sure that if there were a more secure way to do it, Veeam would share it.

I have been looking at other options for further securing access and found that aws:SourceIp conditions would be an additional layer of security we can add, on top of everything else that can be done to lock things down.
https://docs.wasabi.com/v1/docs/how-to- ... ip-address?

That could be worth adding to the Veeam KB/suggestions. In the scenario I described above, the attackers were able to extract S3 access keys but did not do anything with them until many hours later, and the eventual API delete requests came from a foreign IP. Limiting access to known IPs is one more way to protect the data.
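For anyone curious, this is roughly the shape of policy I have in mind (hypothetical bucket name, account and CIDR range; a typo here can just as easily lock Veeam itself out, so test carefully):

```python
# Sketch of an IP allow-list on the bucket: deny every request that does not
# originate from the known backup infrastructure. Hypothetical bucket name
# and address range; immutability and everything else stays in place on top.
import json
import boto3

s3 = boto3.client("s3")  # add endpoint_url=... for Wasabi or other S3-compatible storage

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideKnownIPs",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::veeam-backups",
                "arn:aws:s3:::veeam-backups/*",
            ],
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="veeam-backups", Policy=json.dumps(policy))
```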
gtelnet
Service Provider
Posts: 65
Liked: 29 times
Joined: Mar 28, 2020 3:50 pm
Full Name: Greg Tellone - Cloud IBR
Contact:

Re: Advice on immutable s3 storage with a 'no delete' s3 bucket policy

Post by gtelnet » 1 person likes this post

Pat,

We recently saw the exact same issue you did, and Linux Hardened Repositories and Cloud Connect saved the day. All direct-to-object backups and SOBR offloads to object storage were corrupted by the hackers, who deleted the metadata, which is not immutable. The immutable objects are all still there, untouched, but inaccessible to Veeam. The script provided by Veeam to remove the delete markers has been running for two weeks now with no end in sight. Keep in mind that some object storage vendors only keep 30 days of versioning by default, so if you don't repair the bucket within those 30 days, I'd imagine the bucket is destroyed.

Restricting by IP will help in your exact situation, where the second wave of the attack is launched remotely, but once the hackers realize this, they can just run the delete commands against the bucket directly from the VBR server they've already compromised, since its IP will obviously be in the allowed list.

To make a bold statement, what I've surmised from this event is that there are only three ways to secure your backup data with Veeam.

1. Linux Hardened Repositories with local storage, NOT SAN storage (the NetApp LUN snapshots were also deleted in the case we worked). Make sure they are TRULY hardened, i.e. DO NOT run them as virtual machines or leave iLO/DRAC connected. The only remaining attack vector is someone physically on site. And for the love of God, everyone please stop using Windows/ReFS for backup repositories.

2. Send it to a Cloud Connect partner who hosts Linux Hardened Repositories, since the customer's on-prem VBR doesn't have delete privileges on the remote data until immutability expires. The partner can also create a SOBR and offload it to object storage or Veeam Vault, using keys that only the partner has.

3. Send it to a Cloud Connect partner who forwards the backups to remote object storage using Veeam's capability of either "Connection through a gateway server" or "Direct connection" with "Security provided by IAM/STS object storage capabilities", as described in this link. Both of these methods prevent the on-prem VBR from storing the keys locally (a rough sketch of the STS idea follows below). Combining this with the IP restrictions you wrote about should provide a much greater level of security, but not all object storage vendors support IP restrictions, and I don't think Veeam Vault does either at this point. https://helpcenter.veeam.com/docs/backu ... ct-storage

In any of the above cases, the hacker will not have delete rights to the data, preventing the issue you saw, unless they hack both the customer and the backup provider at the same time.
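To illustrate the general STS idea from option 3 (this is the standard assume-role pattern, not Veeam's internal implementation): the long-lived keys stay with whoever performs the assume-role call, and all that reaches the on-prem side are short-lived credentials that expire on their own.

```python
# Generic illustration of the STS pattern, NOT Veeam's internal mechanism:
# a trusted party holds the long-lived keys and hands out short-lived,
# scoped-down credentials; a stolen token is only useful until it expires.
import boto3

sts = boto3.client("sts")  # uses the provider-side long-lived credentials

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/veeam-offload",  # hypothetical role
    RoleSessionName="offload-session",
    DurationSeconds=3600,  # credentials expire after an hour regardless of what leaks
)

creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# This client can only do what the role allows, and only until the token expires.
```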

We stopped selling "direct to object" because of this security flaw, especially since the backup server is always one of the main targets in these events.

Long live Cloud Connect!
gtelnet
Service Provider
Posts: 65
Liked: 29 times
Joined: Mar 28, 2020 3:50 pm
Full Name: Greg Tellone - Cloud IBR
Contact:

Re: Advice on immutable s3 storage with a 'no delete' s3 bucket policy

Post by gtelnet » 1 person likes this post

Also going to plug our product, Cloud IBR (https://cloudibr.com), which we used for the recovery. We used it to stand up all servers that were sent to Cloud Connect and offloaded to object storage, since those weren't affected by the hackers. We disabled outbound internet for the recovery environment, monitored all outbound connections, blocked malicious IPs provided by the SIEM provider, and gradually opened outbound internet back up. We had the ~25TB environment up and running in ~12 hours.

And a great learning lesson: our product restores your latest recovery point, and many people have asked us, "When we recover with Cloud IBR, how do we pick a restore point from before the hackers infiltrated?" That's impossible to know until forensics are completed, so you'll need to log in to the recovered servers to do the forensics. In the case we worked, the initial infection took place on June 17, 2025, 70 days prior to the final attack. Up until the day of the attack, neither SentinelOne nor Fluency SIEM saw any suspicious behavior (we are working with both of those companies to help them improve).

So our answer is to simply log in to the Cloud IBR portal, click Recover, let it recover the latest backups, disable outbound internet, and begin forensics. In light of this situation, we'll be adding a button to our recovery screen to disable outbound internet, with a note that says, "If you are recovering from a ransomware attack, we highly suggest you disable outbound internet to perform forensics."