Comprehensive data protection for all workloads
Post Reply
jay262
Novice
Posts: 5
Liked: never
Joined: Jul 26, 2023 5:20 pm
Full Name: Jay Summers
Contact:

Immutability & Direct Object Storage Backups

Post by jay262 »

Hi all,

We were running Veeam 11 but have upgraded to Veeam 12 to take advantage of direct backups to object storage. The direction our company is going is to minimize the on-prem presence as much as possible. As a result, all local storage will be decommissioned and we will be leveraging S3 object storage (backing up directly to Wasabi).

One of the new mandates also calls for all cloud repositories (i.e., buckets) to leverage immutability. While this is pretty straightforward, one of the challenges is that management wants different immutability periods depending on the backup type. For example, they want daily backups to be immutable for 14 days, weekly backups for 4 weeks, and monthly backups for 6 months. Since immutability is set at the bucket level, I don't see a way to accomplish this without creating multiple buckets, one for each immutability period. I assume this means that GFS goes out the window and I will instead have to create a new backup job for each immutability period and point it to its corresponding bucket. I assume one way this could be accomplished is to use Backup Copy; however, backup copies would not really solve the problem, since you would still end up with multiple backup copies, each going to its corresponding bucket.

I am hoping I am wrong and there is a simple, yet straightforward way to accomplish this. It would be nice if there was a way to have GFS backups go to different repositories as part of the same backup job, but as far as I know, that is not possible.

Any feedback is much appreciated!

Jay
Gostev
Chief Product Officer
Posts: 31364
Liked: 6604 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Immutability & Direct Object Storage Backups

Post by Gostev »

Hello,

I would advise strongly against using cloud object storage as your primary and only backup repository.

To name just a few reasons why:
- No compliance with the 3-2-1 rule of backup: single backup, no additional copy. This sets you up for a recovery failure, the only question is when.
- Backup performance over the Internet and its consequences, e.g., how long snapshots stay open, impacting your production VMs.
- Restore performance over the Internet: will it be sufficient to meet your RTO? Have you tested the performance of all restore types you intend to use?
- Disaster recovery: same idea as the previous point, but now you have to do a *mass* restore over the Internet. Can your business accept being down for weeks?

Answering the main question. First and foremost, you should never set or manage immutability at the Wasabi bucket level, if that is what you meant. You need to let Veeam manage immutability. Now, the repository-level immutability setting in Veeam actually applies only to recent backups (those you called daily). GFS backups (weekly, monthly), on the other hand, are made immutable for the duration of their retention policy, irrespective of that value. You will see the corresponding note in the immutability settings of the object storage repository wizard.

So, just make sure you set the GFS retention policy in the backup job according to the requirements from your management, and immutability will follow.
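To make the behavior described above concrete, here is a minimal Python sketch of that lock-duration logic. This is purely illustrative, not Veeam's actual implementation; the retention values are the assumed requirements from the first post in this thread:

```python
from datetime import datetime, timedelta

# Assumed policy values, taken from the requirements in this thread.
REPO_IMMUTABILITY_DAYS = 14          # repository "make immutable for X days"
GFS_RETENTION = {
    "weekly": timedelta(weeks=4),    # keep weeklies for 4 weeks
    "monthly": timedelta(days=182),  # keep monthlies for ~6 months
}

def immutable_until(created: datetime, backup_type: str) -> datetime:
    """Return the date until which a restore point stays locked."""
    if backup_type in GFS_RETENTION:
        # GFS points: immutability follows the retention policy,
        # regardless of the repository-level setting.
        return created + GFS_RETENTION[backup_type]
    # Recent (daily) points: the repository-level setting applies.
    return created + timedelta(days=REPO_IMMUTABILITY_DAYS)

created = datetime(2023, 8, 1)
print(immutable_until(created, "daily"))    # locked for 14 days
print(immutable_until(created, "monthly"))  # locked for ~6 months
```

The point of the sketch is simply that a single job with one repository can yield per-type immutability periods, which is why no extra buckets or jobs are needed.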

Thanks and please, for your own sake, do not decommission that local storage :)
jay262
Novice
Posts: 5
Liked: never
Joined: Jul 26, 2023 5:20 pm
Full Name: Jay Summers
Contact:

Re: Immutability & Direct Object Storage Backups

Post by jay262 »

Gostev,

Thanks so much for the reply, very informative! One thing I left out is that the on-prem presence is small, around 8-10 VMs, with about 99% of the restores being individual files. This plan to decommission local storage started before I got here. In fact, the VBR server is running in Azure and currently using a SOBR made up of a performance tier and a capacity tier. So technically, we've already been doing all of our restores from the "internet" per se. What we now want to do is to further reduce costs by getting rid of the Azure performance tier and simply backing up directly to object storage. The idea is to have Azure VMs back up directly to blob storage and on-prem VMs back up directly to Wasabi (using local proxies).

The question I had regarding bucket immutability may not have been framed correctly. I didn't mean to say that we're setting an immutability period from the Wasabi side. The bucket in Wasabi simply has versioning and object locking enabled. My question was more around the immutability period set when you configure the repository from the Veeam side ("make immutable for X number of days"). I did not know if that setting affected GFS as well, but it is clear now how it works.
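As a side note, since Wasabi is S3-compatible, you can sanity-check from outside Veeam that the bucket really has versioning and Object Lock enabled. Below is a minimal boto3 sketch; the bucket name and endpoint are placeholders, and the parsing helpers are my own illustration, not part of any Veeam or Wasabi tooling:

```python
def lock_enabled(lock_config: dict) -> bool:
    """True if a GetObjectLockConfiguration response shows Object Lock on."""
    cfg = lock_config.get("ObjectLockConfiguration", {})
    return cfg.get("ObjectLockEnabled") == "Enabled"

def versioning_enabled(versioning: dict) -> bool:
    """True if a GetBucketVersioning response shows versioning on."""
    return versioning.get("Status") == "Enabled"

if __name__ == "__main__":
    import boto3  # requires the boto3 package and Wasabi credentials

    # Placeholder endpoint and bucket name; adjust for your region/account.
    s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")
    bucket = "my-veeam-bucket"  # hypothetical bucket name
    print(lock_enabled(s3.get_object_lock_configuration(Bucket=bucket)))
    print(versioning_enabled(s3.get_bucket_versioning(Bucket=bucket)))
```

Note that Object Lock can only be enabled at bucket creation time on S3-compatible storage, which is worth checking before pointing a repository at an existing bucket.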

Since the VBR server runs in Azure, in the event of a catastrophic failure (on-prem) the idea would be to restore the on-prem VMs in Azure or to the other datacenter (currently have 2 datacenters with about 4-5 VMs each).

One of the reasons we went with Wasabi was to avoid API and egress charges. Since we have local proxies at each data center, the assumption is that these proxies would be the ones sending the data over to Wasabi, so the backup data would never go to Azure first and then off to Wasabi. However, when it comes to mount servers, the documentation says that when using cloud repositories, the VBR server itself is the mount server, which makes it sound as if you don't have a choice. So my question is: if the Veeam server sits in Azure and it is the mount server, and the backup data sits in Wasabi, would that result in egress charges? That is, the repository is mounted to the VBR server, and then restore data is sent from the VBR server to on-prem. Or am I totally wrong with my data path assumptions?

Thanks for all the feedback! Much appreciated

-Jay
Gostev
Chief Product Officer
Posts: 31364
Liked: 6604 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Immutability & Direct Object Storage Backups

Post by Gostev »

Got it. In that case, my biggest concern would be a single backup with no copy. You do have backup copies with your current in-Azure design, as your Performance Tier and Capacity Tier use different media. Not perfect, because they are both with the same vendor, but this is still a much better level of 3-2-1 compliance than having a single backup.

Please note that your previous experience of performing restores from an in-Azure SOBR might not be too relevant: SOBR restores are intelligent in the sense that the restore process will not download from the Capacity Tier any data blocks that already exist in the Performance Tier. This speeds up restores because, being block storage, the Performance Tier is usually a few times faster than object storage (in your case, how much faster depends on the Azure VM instance type backing your Performance Tier). This is another reason to test restore performance directly from object storage and ensure it is satisfactory (meets the SLAs of your business).

Egress depends on the DR scenario. If you use the VBR server in Azure to restore from Wasabi into a third data center, then all restore traffic will loop through Azure, causing Azure egress charges. Because of that, in such a DR scenario you will want to install VBR in the third data center and import your Wasabi backups there. But both of these operations are super simple anyway.

Thanks!
