Discussions related to using object storage as a backup target.
pat_ren
Service Provider
Posts: 133
Liked: 31 times
Joined: Jan 02, 2024 9:13 am
Full Name: Pat

Advice on immutable s3 storage with a 'no delete' s3 bucket policy

Post by pat_ren »

I've been doing some research on this, and in theory it seems possible, but I haven't found any info from Veeam that backs this up. Hoping I can get some advice.

I was recently involved in helping to restore a client who was breached. The attackers were able to access the Veeam server and extract the goldmine of info it contains, such as Wasabi access keys. The backups were immutable, but the attackers were still able to try and 'delete' the backups by placing delete markers on every object (6 million+ objects), so this took some time to undo before I could assist with the recovery. I have built scripts to help with this, capable of running many parallel threads to delete the markers, but the initial listing of objects to get their versions is the slowest part of the process. A daily S3 inventory could possibly help with this, but it's limited to once a day/week and could be stale when it's needed the most.
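For anyone facing the same cleanup, here is a minimal sketch of the approach my scripts take, assuming boto3 and S3 credentials are configured. The bucket/prefix names and thread count are hypothetical; adapt before use. The slow part is the version listing, which has to page through every object:

```python
# Sketch: remove attacker-placed delete markers in parallel.
# Hypothetical bucket/prefix names; assumes boto3 credentials are configured.
from concurrent.futures import ThreadPoolExecutor


def chunk(items, size=1000):
    """S3 DeleteObjects accepts at most 1,000 keys per request."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def find_delete_markers(s3, bucket, prefix=""):
    """List delete markers currently hiding objects (the slow part)."""
    markers = []
    paginator = s3.get_paginator("list_object_versions")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for m in page.get("DeleteMarkers", []):
            if m["IsLatest"]:  # only markers that are the current version
                markers.append({"Key": m["Key"], "VersionId": m["VersionId"]})
    return markers


def remove_markers(bucket, markers, workers=16):
    """Delete the markers in 1,000-key batches across parallel threads."""
    import boto3  # imported here so chunk() is testable without boto3 installed
    s3 = boto3.client("s3")
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(
            lambda batch: s3.delete_objects(
                Bucket=bucket, Delete={"Objects": batch, "Quiet": True}),
            chunk(markers),
        ))
```

Removing a delete marker requires its specific VersionId, which is why the full version listing can't be skipped.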

I've been considering whether more can be done to secure immutable storage and lock things down further. Immutable storage is mandatory now, but it isn't perfect. If the S3 access policy used by Veeam to connect to a bucket allows deletes and an attacker can get those keys, the data can be compromised. No data will be lost, but recovery will be slowed down.

Veeam's own documentation states that s3:DeleteObject and s3:DeleteObjectVersion are required. This makes sense, since Veeam needs to be able to manage objects and retention.
https://helpcenter.veeam.com/docs/backu ... ml?ver=120
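For context, the relevant part of such an access policy looks roughly like the excerpt below. The bucket name is hypothetical and the action list is illustrative only; the complete set of permissions Veeam requires is in the linked doc:

```python
# Hypothetical excerpt of an S3 access policy, showing where the delete
# actions sit. Illustrative only; see the linked Veeam doc for the full list.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "VeeamRepositoryAccess",
        "Effect": "Allow",
        "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject",         # retention cleanup of outdated objects
            "s3:DeleteObjectVersion",  # removal of superseded versions
            "s3:ListBucket",
            "s3:ListBucketVersions",
        ],
        "Resource": [
            "arn:aws:s3:::veeam-backups",
            "arn:aws:s3:::veeam-backups/*",
        ],
    }],
}
print(json.dumps(policy, indent=2))
```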

I have been looking into whether there is a way to configure these policies without allowing any s3:Delete* permissions, using another method to expire old objects instead.
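The alternative expiry mechanism I had in mind is a bucket lifecycle rule, something like the sketch below. The bucket name and retention window are hypothetical, and this assumes the storage provider supports the standard S3 lifecycle API:

```python
# Sketch: let the bucket expire old versions via a lifecycle rule, instead of
# granting s3:Delete* to the backup account. Hypothetical bucket name and
# retention window; assumes the provider supports the S3 lifecycle API.
import json

lifecycle = {
    "Rules": [{
        "ID": "expire-noncurrent-versions",
        "Status": "Enabled",
        "Filter": {},  # apply to the whole bucket
        # Remove versions 30 days after they stop being current:
        "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        # Clean up delete markers left with no remaining versions:
        "Expiration": {"ExpiredObjectDeleteMarker": True},
    }],
}

# To apply (requires boto3 and credentials):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="veeam-backups", LifecycleConfiguration=lifecycle)
print(json.dumps(lifecycle, indent=2))
```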

Obviously the best solution to this problem is making sure no bad actors can get to the Veeam server in the first place (the new v13 appliance looks interesting for this). However, assuming a capable, advanced threat actor and a perfect-storm zero-day scenario (as I was recently involved with), any extra security I can implement would be of value. Cyber-attacks are getting more and more sophisticated, attackers are using more advanced tools (including AI tools), and nothing can be left to chance anymore. They will find a hole if it exists.

Please let me know if anyone has tried anything like this with their access policies and, if so, whether it was successful. Thanks
Mildur
Product Manager
Posts: 10824
Liked: 2949 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Advice on immutable s3 storage with a 'no delete' s3 bucket policy

Post by Mildur » 1 person likes this post

Hi Pat,

As you mentioned, the s3:DeleteObject and s3:DeleteObjectVersion permissions are mandatory; otherwise, your backup server will not be able to delete outdated objects. If the backup server can't clean up old objects, your bucket could become a mess very quickly.

And deleting backup objects outside of the Veeam application is not supported. Even if it were, it would be impossible without accessing the metadata on the backup server to identify which objects need to be deleted.

Best,
Fabian
Product Management Analyst @ Veeam Software
pat_ren
Service Provider
Posts: 133
Liked: 31 times
Joined: Jan 02, 2024 9:13 am
Full Name: Pat

Re: Advice on immutable s3 storage with a 'no delete' s3 bucket policy

Post by pat_ren »

Mildur wrote: Sep 09, 2025 6:52 am Hi Pat,

As you mentioned, the s3:DeleteObject and s3:DeleteObjectVersion permissions are mandatory; otherwise, your backup server will not be able to delete outdated objects. If the backup server can't clean up old objects, your bucket could become a mess very quickly.

And deleting backup objects outside of the Veeam application is not supported. Even if it were, it would be impossible without accessing the metadata on the backup server to identify which objects need to be deleted.

Best,
Fabian
Thanks Fabian, I thought that would be the case, but it never hurts to ask. I'm sure that if there were a more secure way to do it, Veeam would share it.

I have been looking at other options for further securing access and found that aws:SourceIp conditions would add another layer of security, on top of everything else that can be done to lock things down.
https://docs.wasabi.com/v1/docs/how-to- ... ip-address?

That could be worth adding to the Veeam KB/suggestions. In the scenario I described above, the attackers extracted the S3 access keys but did not use them until many hours later, and the eventual API delete requests came from a foreign IP. Limiting access to known IPs is one more way to protect the data.
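A minimal sketch of that kind of restriction is below. The bucket name and CIDR range are hypothetical; Wasabi accepts AWS-style bucket policies per the linked doc. One caution: test from a second session before saving, since a wrong SourceIp condition can lock out legitimate access too.

```python
# Sketch: bucket policy denying all S3 actions from outside a known IP range.
# Hypothetical bucket name and CIDR; verify the range before applying, as a
# bad aws:SourceIp condition can lock out legitimate access as well.
import json

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnknownSourceIp",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::veeam-backups",
            "arn:aws:s3:::veeam-backups/*",
        ],
        "Condition": {
            "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]},
        },
    }],
}
print(json.dumps(bucket_policy, indent=2))
```

An explicit Deny with NotIpAddress overrides any Allow, so stolen keys are useless from outside the listed range.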
