I am testing the object storage feature with immutability.
I have an SOBR with copy backups going to it.
The SOBR comprises a Windows extent, and Amazon S3 object storage with immutability.
I have 1 VM being backed up to a repo, and copied to the SOBR.
If I attempt to delete the VM backup from the SOBR, it throws an error saying it is immutable until X date.
If I go into the file system (as an attacker might) and delete the VBK file from the local repository, and the copy backup from the local extent of the SOBR... of course it's gone.
But it's still in the object storage, and from there I was able to restore it, which is great.
What I can't seem to do is fix the fact that it is no longer on the file system locally in the SOBR extent.
The Veeam console is not aware the local copy has gone unless I try specific restore operations from the Disk Copy area of Backups, and even after this it doesn't flag an error on the backup chain.
So I broke the SOBR by removing the object storage from it, and then imported the backup from the object storage.
Now I have the backup chain under Object Storage (Imported), but I can't find any way to copy this to a local repository; there is no copy-to or move-to performance tier option, as there is no longer an SOBR.
Also, this bucket can no longer be used for an SOBR as it has imported objects in it.
So there is a big gap here for me. Some questions for the experts:
1) What is the correct way to recover from an insider or hacker attack where all your local backups have been wiped, but you are able to retain your Veeam configuration or otherwise restore the configuration of the SOBR? You have the backups in Amazon S3 object storage and want to download them back from the capacity tier, so everything is as it was before the attack.
2) I noticed when restoring from object storage that, because I had selected the option to encrypt the backups at the point they are offloaded to the capacity tier, I needed a password. For some reason this password is not protected by Enterprise Manager, so if it gets lost we cannot restore any backups from object storage. Is there a reason for this or a way around it?
2.a.) The reason I chose to encrypt at this point is that the local backup files are not encrypted, so we can use the deduplication feature in Windows Server. Am I correct in thinking that deduplication will not work across a bunch of similar backup chains that are encrypted? Even though the same data is encrypted by the same mechanism, the output is effectively randomized and never produces the same blocks (see the small illustration after these questions). If this is the case, I will have to encrypt the copy backup jobs and remove the encryption at the object storage level, which just encrypts the bucket itself.
3) I read on this forum that out of the box the maximum immutability is 90 days. If we want to offload to the capacity tier for archival purposes, say 7 years of annual backups, is there any way to manually set immutability on those specific GFS archives? We would not want VMs to be deleted from them, for example, or hackers to be able to delete them from the bucket. But obviously we don't want 7 years of immutability set on all backup chains, as it would mean Veeam could not automatically remove the instantly offloaded current backup files when they expire with the retention period.
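To illustrate the point in 2.a., here is a small, generic PowerShell sketch (nothing Veeam-specific; the key, IVs and sample data are made up) showing why encrypted copies of identical data do not deduplicate: the same plaintext encrypted twice with fresh IVs produces different ciphertext blocks.

# 4 KB of identical data, standing in for a repeated backup block
$data = [byte[]](,0x41 * 4096)

$aes = [System.Security.Cryptography.Aes]::Create()
$aes.GenerateKey()                                  # same key for both "copies"

$aes.GenerateIV()                                   # fresh IV for the first copy
$copy1 = $aes.CreateEncryptor().TransformFinalBlock($data, 0, $data.Length)

$aes.GenerateIV()                                   # fresh IV for the second copy
$copy2 = $aes.CreateEncryptor().TransformFinalBlock($data, 0, $data.Length)

# Identical plaintext, same key, yet the ciphertexts differ, so a block-level
# deduplicator (such as Windows Server dedup) sees two unique sets of blocks.
[Convert]::ToBase64String($copy1) -eq [Convert]::ToBase64String($copy2)     # False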
Conrad Goodman
Re: DR Scenario: Deleted backups from Disk; retained in immutable object storage... How to download.
1. Did you try to rescan the SOBR prior to breaking it? Run a rescan and it should find that the local one is gone, and you can then import.
2. There currently is no method for recovering this password so make sure you remember it.
2a. Dedupe will work on encrypted backups, albeit not as well. One thing to remember is that data is encrypted in transit, and that S3 is encrypted by default, but you can also add our encryption for peace of mind.
3. You can set a longer period via PowerShell (rough sketch below). But make sure this is something you plan out and think about very carefully. Imagine making a mistake and then being stuck paying for backups that you can't delete for 7 years.
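To give a feel for the shape of that, here is a purely hypothetical sketch; the cmdlet and parameter names (Get-VBRObjectStorageRepository, Set-VBRAmazonS3Repository, -EnableBackupImmutability, -ImmutabilityPeriod) and the repository name are assumptions to verify against the Veeam PowerShell reference for your version before using anything like this.

# Hypothetical sketch only; confirm cmdlet/parameter names for your B&R version.
$s3repo = Get-VBRObjectStorageRepository -Name "AWS-S3-Immutable"   # placeholder repository name
Set-VBRAmazonS3Repository -Repository $s3repo -EnableBackupImmutability -ImmutabilityPeriod 2555   # ~7 years, in days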
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
Re: DR Scenario: Deleted backups from Disk; retained in immutable object storage... How to download.
1. I don't believe so, but it's fully broken now, so I will rebuild it and test again.
2. In which case I may choose not to use it at that point, and instead encrypt the copy backup job, which is password-recovery protected by Enterprise Manager (rough sketch below).
3. Yeah, I won't rush into something like that...
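For reference, a sketch of what enabling job-level encryption from PowerShell might look like; the cmdlet and parameter names (Add-VBREncryptionKey, Set-VBRJobAdvancedStorageOptions, -EnableEncryption, -EncryptionKey) and the job name are assumptions to double-check against the cmdlet reference for your version.

# Hypothetical sketch; verify cmdlet/parameter names for your B&R version.
$key = Add-VBREncryptionKey -Password (Read-Host "Encryption password" -AsSecureString) -Description "Copy job key"
$copyJob = Get-VBRJob -Name "Backup Copy Job 1"                     # placeholder job name
Set-VBRJobAdvancedStorageOptions -Job $copyJob -EnableEncryption $true -EncryptionKey $key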
Re: DR Scenario: Deleted backups from Disk; retained in immutable object storage... How to download.
Hi @ConradGoodman,
ad1. I tried this scenario: I deleted the whole folder from the performance tier for my test backup (including the indexes). Then I ran a rescan of the SOBR and all indexes were re-downloaded from the capacity tier. After that I was also able to make a test restore directly from S3 with no local data; it was slower, as all data was pulled directly from S3, but it worked. The next time I started this test job it failed, as it was trying to do an incremental. But when I selected an active full it was fine, and it was also offloaded to S3 later on. A rough PowerShell equivalent of the same steps is sketched below.
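This is only a sketch of the equivalent commands; verify the cmdlets (Sync-VBRBackupRepository in particular, and whether it accepts a scale-out repository) against the PowerShell reference for your version. The repository and job names are placeholders.

# Rescan the scale-out repository so the missing local backups are detected
# and the indexes are pulled back down from the capacity tier.
$sobr = Get-VBRBackupRepository -ScaleOut -Name "SOBR-01"           # placeholder SOBR name
Sync-VBRBackupRepository -Repository $sobr

# The next incremental will fail because the local chain is gone,
# so start an active full to begin a new chain.
$job = Get-VBRJob -Name "Test Backup Job"                           # placeholder job name
Start-VBRJob -Job $job -FullBackup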
Dariusz