ConradGoodman
Expert
Posts: 109
Liked: 5 times
Joined: Apr 21, 2020 11:45 am
Full Name: Conrad Goodman

Using SOBR with Capacity Tier to store Backup Copy GFS, move only quarterly/yearly to cloud.

Post by ConradGoodman »

I have a reasonably large Windows Agent SQL backup job (4TB and growing) that is currently configured as follows:

Data Centre 1 (DC1)
Veeam B+R Server (50TB)
SQL Server Backup Job, which contains backups of SQL_1 + SQL_2 + SQL_3 (3 Windows agents).
The job creates active fulls on a weekly basis and currently keeps 7 restore points.

Data Centre 2 (DC2)
Veeam B+R Server (15TB, can be expanded with a disk shelf)
Various jobs.

I need to create a Backup Copy Job for the SQL Server Backup Job that creates monthly, quarterly and yearly archives (GFS backups):

Monthly: 3
Quarterly: 4
Yearly: 7

I would like to create a SOBR with a capacity tier at DC2 that offloads only specific backup archives.

As the job is fairly large at 4TB and growing, cloud object storage will be fairly expensive on a monthly basis.

Therefore I only want objects stored in the cloud for off-site archival purposes. We would be looking to retain up to 7 years of data.

Would it be possible to move only quarterly and yearly OR only yearly recovery points to object storage?

From what I've read, the capacity tier would move ONLY the GFS archives from the Backup Copy Job, based on how old they are.

I believe in this scenario, if I set the 'Operational Restore Window' of the capacity tier to 365 days, each yearly GFS archive would get moved to the cloud once it is more than 365 days old.

Or if I set it to 84 days, it would offload the quarterlies once they are roughly one quarter old.
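
To make sure I've got the model right, here's a tiny sketch of how I understand offload eligibility (this is just my assumption about the policy, not Veeam's actual logic):

Code: Select all
from datetime import date

# My mental model (an assumption, not Veeam's code): a GFS restore point
# becomes eligible for offload to the capacity tier once it falls outside
# the operational restore window, measured from today.
def eligible_for_offload(point_date: date, window_days: int, today: date) -> bool:
    return (today - point_date).days > window_days

today = date(2021, 1, 1)
quarterly = date(2020, 10, 1)                        # a quarterly point, 92 days old
print(eligible_for_offload(quarterly, 84, today))    # True  -> offloaded
print(eligible_for_offload(quarterly, 365, today))   # False -> stays local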

What's lacking in my understanding is the overlap.

For example, when this backup copy job is first run it will create a full backup. Would this point immediately be marked as a GFS yearly AND a GFS quarterly AND a GFS monthly?

Or does it just mark it as a GFS yearly, then not create a monthly until 4 weeks later?

At the point it creates a quarterly, does that point skip the monthly because they coincide?
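
To make the overlap question concrete, here's a toy model of one flagging scheme I could imagine (purely my assumption, not Veeam's documented behaviour):

Code: Select all
from datetime import date

# Toy model only: shows how a single restore point could carry several GFS
# flags when the schedules coincide (e.g. Jan 1 = monthly + quarterly + yearly).
def gfs_flags(d: date) -> list[str]:
    flags = []
    if d.day == 1:
        flags.append("monthly")
        if d.month in (1, 4, 7, 10):
            flags.append("quarterly")
        if d.month == 1:
            flags.append("yearly")
    return flags

print(gfs_flags(date(2021, 1, 1)))  # ['monthly', 'quarterly', 'yearly']
print(gfs_flags(date(2021, 4, 1)))  # ['monthly', 'quarterly']
print(gfs_flags(date(2021, 2, 1)))  # ['monthly']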

For example this is what I want the end result to look like:

Year 1
1/1: GFS Yearly - CLOUD
1/2: GFS Monthly 1 - LOCAL
1/3: GFS Monthly 2 - LOCAL
1/4: GFS Quarterly 1 - CLOUD
1/5: GFS Monthly 1 - LOCAL
1/6: GFS Monthly 2 - LOCAL
1/7: GFS Quarterly 2 - CLOUD
1/8: GFS Monthly 1 - LOCAL
1/9: GFS Monthly 2 - LOCAL
1/10: GFS Quarterly 3 - CLOUD
1/11: GFS Monthly 1 - LOCAL
1/12: GFS Monthly 2 - LOCAL
Year 2
1/1: GFS Yearly 2 - CLOUD
Year 3:
1/1: GFS Yearly 3 - CLOUD
------------------
Year 7: GFS Yearly 7 - CLOUD
END

So in year 1, we would end up with 4 restore points in the cloud, growing by 1 restore point per year after that.

That would be around 16TB in the cloud, growing by around 4TB a year.
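
For clarity, the naive arithmetic behind those figures (my assumption here is that every cloud point costs its full size, with no block reuse):

Code: Select all
# Naive sizing model: every offloaded GFS point costs a full 4 TB.
full_tb = 4
points = 1 + 3                  # year 1: one yearly + three quarterlies
print(points * full_tb)         # 16 TB after year 1
points += 6                     # years 2-7 each add one more yearly
print(points * full_tb)         # 40 TB after year 7 on this model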

Is there a cheaper way to do this, or would the other options (Veeam Cloud Connect, or a private Veeam server in a private cloud) cost a lot more?


I believe Glacier storage can also be harnessed, but only with a virtual tape library?
ConradGoodman
Expert
Posts: 109
Liked: 5 times
Joined: Apr 21, 2020 11:45 am
Full Name: Conrad Goodman

Re: Using SOBR with Capacity Tier to store Backup Copy GFS, move only quarterly/yearly to cloud.

Post by ConradGoodman »

I have one further question about all of this.

It would make sense to have immutability set for 7 years, so that the yearly archives cannot be deleted until their retention period in the backup copy job is up.

But from my understanding, this would interfere with the automatic deletion of the expiring quarterly backups, as immutability is set for the entire tier, not on specific objects.

I guess we could get around this in Amazon by manually marking the yearlies as immutable? An ugly solution, though.
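
Something like this is what I had in mind, using S3 Object Lock's per-object retention (bucket and key names below are made up, and the bucket would have to be created with Object Lock enabled):

Code: Select all
import boto3
from datetime import datetime, timezone

# Sketch of the manual workaround: pin a 7-year COMPLIANCE retention on one
# object. Note the capacity tier stores backups as many small block objects,
# not one big file, so this would have to be applied per object -- part of
# why it's ugly.
s3 = boto3.client("s3")
s3.put_object_retention(
    Bucket="my-veeam-archive",            # hypothetical bucket, Object Lock enabled
    Key="yearly/2021-01-01-block-0001",   # hypothetical object key
    Retention={
        "Mode": "COMPLIANCE",             # cannot be shortened or removed once set
        "RetainUntilDate": datetime(2028, 1, 1, tzinfo=timezone.utc),
    },
)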
jmmarton
Veeam Software
Posts: 2097
Liked: 310 times
Joined: Nov 17, 2015 2:38 am
Full Name: Joe Marton
Location: Chicago, IL

Re: Using SOBR with Capacity Tier to store Backup Copy GFS, move only quarterly/yearly to cloud.

Post by jmmarton »

There are a few things to consider here. First, you can think of how we write to object storage as being similar to how we leverage ReFS. This means that as those GFS points are written to object storage, if blocks have already been written as objects, we can reference those objects in new restore points. So if you have a yearly 4 TB backup going to the cloud, this doesn't mean 16 TB will be used after 4 years. It will likely be something less than that, though how much less depends on just how the data changes year over year. The more unique blocks, the more object storage is used.
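
As a rough illustration (the change rate below is made up; the real number depends entirely on your data):

Code: Select all
# Back-of-envelope sketch, not Veeam's actual accounting: with block reuse,
# each new yearly only adds the blocks that are unique versus what is
# already in the bucket.
full_tb = 4.0
unique_fraction = 0.25              # assumed year-over-year unique blocks
stored = full_tb                    # year 1: the first full lands in whole
for year in range(2, 5):
    stored += full_tb * unique_fraction
    print(f"after year {year}: ~{stored:.0f} TB vs naive {full_tb * year:.0f} TB")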

From an immutability perspective, the UI currently only allows for a max of 90 days, though it's possible to increase this further using PowerShell. But that's not the main intent of today's immutability feature: it's meant to ensure that recent backups can't be deleted, so that if everything is lost on-premises, you still have recent backups in the cloud.

Ultimately, if you want 7-year immutability for the yearlies but not for the other GFS points, you may want to consider having two different SOBRs. You could enable GFS in the primary job for just yearly retention and target a SOBR that uses immutable S3 storage. Then you can have a Backup Copy Job copy backups to a different SOBR with monthly/quarterly retention, which also copies to S3, but this time without immutability enabled.
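
If you go that route, it's easy to sanity-check which bucket actually has Object Lock turned on (bucket names below are placeholders):

Code: Select all
import boto3
from botocore.exceptions import ClientError

# Verify the two-SOBR split: only the yearly SOBR's bucket should report
# Object Lock enabled.
s3 = boto3.client("s3")
for bucket in ("veeam-sobr-yearly", "veeam-sobr-monthly"):
    try:
        cfg = s3.get_object_lock_configuration(Bucket=bucket)
        print(bucket, "->", cfg["ObjectLockConfiguration"]["ObjectLockEnabled"])
    except ClientError:
        print(bucket, "-> Object Lock not configured")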

Joe
