Discussions related to using object storage as a backup target.

Minimum required data storage duration for AWS Capacity and Archive Tier

Post by vaxyz »

Hi,

I recently set up a SOBR in my environment using AWS S3 Standard and AWS Glacier Deep Archive as my object storage.

My question is about the "Archive backups only if the remaining retention time is above minimal storage period" option. I have this checked.

If I set "Archive GFS backups older than 7 days" will my data be moved from the Capacity Tier (AWS S3 Standard) to the Archive Tier (Glacier)?
Post by Mildur »

Hello vaxyz

Welcome to the forum.
With that setting, GFS restore points with a remaining retention longer than 180 days can be moved. In your case, only monthly and yearly backups will be moved: your weekly backups only have a retention of 4 weeks, which is below the minimum storage duration of AWS Glacier Deep Archive (180 days). Your weekly GFS restore points and incremental backups will stay in the Capacity Tier.

Best,
Fabian
Product Management Analyst @ Veeam Software
Post by vaxyz »

My Backup Job is set to the following:

Retention policy: 60 days

GFS:
4 weeks
12 months
3 years

If I uncheck "Archive backups only if the remaining retention time is above minimal storage period", will I be penalized by AWS? I thought there was no minimum storage duration for S3 Standard.
Post by Mildur »

vaxyz wrote: I thought there was no minimum storage duration for S3 Standard.
True for S3 Standard, but there is one for the AWS Glacier storage classes.

Take a weekly GFS retention of 4 weeks as an example. Each weekly backup is kept for 4 weeks, which is 28 days.

Now you move that weekly GFS restore point to AWS Glacier Deep Archive, which has a minimum storage duration of 180 days, while your configuration says to delete the backup after 28 days. That is 152 days before you are allowed to delete it, and Amazon will bill you an early deletion fee for the remainder :)
https://aws.amazon.com/premiumsupport/k ... lete-fees/
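
Purely to illustrate the math (this is not AWS billing code; the minimum duration is the documented Deep Archive value, but the per-GB rate below is an assumed list price, so check the current AWS pricing page):

Code:

from datetime import date, timedelta

# Sketch of the early-deletion math. MIN_STORAGE_DAYS is the documented
# Deep Archive minimum; the per-GB-month rate is an assumption.
MIN_STORAGE_DAYS = 180
DEEP_ARCHIVE_RATE_PER_GB_MONTH = 0.00099  # assumed USD per GB-month

def early_deletion_fee(uploaded: date, deleted: date, size_gb: float) -> float:
    """Prorated charge for the unmet part of the minimum storage duration."""
    stored_days = (deleted - uploaded).days
    remaining_days = max(0, MIN_STORAGE_DAYS - stored_days)
    return size_gb * DEEP_ARCHIVE_RATE_PER_GB_MONTH * remaining_days / 30

# A weekly GFS point deleted after 28 days leaves 152 chargeable days:
uploaded = date(2023, 2, 1)
print(early_deletion_fee(uploaded, uploaded + timedelta(days=28), size_gb=500))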

Best,
Fabian
Product Management Analyst @ Veeam Software
Post by vaxyz »

I think I'm starting to understand how this works now. GFS is just for AWS Glacier - correct?

To avoid early deletion fees, I should not send weekly full backups to Glacier, but instead keep 31 days' worth of data in my Capacity Tier (S3 Standard).

Does this look correct to you?

Backup Job is set to the following:

GFS:
4 weeks - Unchecked
12 months - Checked
3 years - Checked

Archive Tier:
"Archive GFS backups older than 31 days"

"Archive Backup only if the remaining retention time is above minimal storage period" - Uncheck
Post by vaxyz »

Here are screenshots.

I'm not sure it's the best idea for me to uncheck weekly backups. I would think I need weekly GFS retention if I want backups flagged as GFS restore points.

[screenshot]

[screenshot]

Case# 05885797
Post by Mildur »

Hello vaxyz
vaxyz wrote: I think I'm starting to understand how this works now. GFS is just for AWS Glacier - correct?
The Archive Tier is limited to GFS-tagged backups (a few other backup types are possible too). It wouldn't make sense to offload daily backups that are deleted a few days later; the early deletion penalty would be too costly.
vaxyz wrote: To avoid early deletion fees, I should not send weekly full backups to Glacier, but instead keep 31 days' worth of data in my Capacity Tier (S3 Standard).
Correct. It's cheaper to keep weekly GFS backups in the Capacity Tier when they have a retention of only a few weeks.
vaxyz wrote: "Archive backups only if the remaining retention time is above minimal storage period" - Unchecked
Leave it checked. That way you make sure no backups with shorter retention are offloaded.

You can leave weekly GFS enabled. Just as a note: with 30 or 60 days of short-term retention, a weekly GFS point wouldn't be deleted before the short-term retention is over anyway.

Weekly GFS restore points won't use the space of an entire full backup in the Capacity Tier.
In object storage, we don't have "backup files". We split all restore points into 1 MB blocks (the default) and upload only unique blocks to the Capacity Tier. A weekly GFS restore point is just a collection of already-offloaded, unchanged blocks from previous restore points plus the incremental backup from that particular weekly GFS day (Sunday in your case).
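
To make the idea concrete, here is a minimal sketch of that block-level offload logic (not Veeam's actual code; fixed 1 MB blocks and SHA-256 block IDs are simplifying assumptions):

Code:

import hashlib

BLOCK_SIZE = 1024 * 1024  # the 1 MB default block size mentioned above

def offload(restore_point: bytes, uploaded: set) -> list:
    """Split a restore point into blocks and upload only unseen ones.

    Returns the block IDs that make up this restore point; 'uploaded'
    tracks which block IDs already exist in the Capacity Tier.
    """
    block_ids = []
    for off in range(0, len(restore_point), BLOCK_SIZE):
        block = restore_point[off:off + BLOCK_SIZE]
        block_id = hashlib.sha256(block).hexdigest()
        if block_id not in uploaded:
            uploaded.add(block_id)   # new unique block: this one gets uploaded
        block_ids.append(block_id)   # every restore point references its blocks
    return block_ids

# A weekly GFS point that shares most blocks with earlier restore points
# uploads almost nothing new; it is mostly a list of block references.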

Best,
Fabian
Product Management Analyst @ Veeam Software
Post by vaxyz »

Hi Fabian,

Thank you for the clarification.

Since my last post, I have talked to ExaGrid support and Todd (Veeam support) to come up with a solution that fits my retention needs.

I would like to get your thoughts on this.

Objective for on-prem retention:

A backup job with an ExaGrid backup repository, keeping 1 year's worth of my data on the ExaGrid with a combination of weekly, monthly, and yearly points.

New Backup Repository and edit Backup job:

1. Create a new ExaGrid share folder and add a new backup repository using that share folder.
2. Edit the existing backup job (SRV-WIKI) and change the storage from the SOBR to the new backup repository.
3. Retention policy: 31 days
4. GFS - 4 weeks - 12 months - 3 years
5. Check "Create synthetic full backup periodically" on Saturday
6. Check "Perform backup files health check (detect and auto-heals corruption)" once a month on Saturday
7. Compression level: Dedupe-friendly

[screenshot]

Objective for cloud retention in object storage (AWS):

2 weekly points (Capacity Tier) and 36 monthly points (Archive Tier)

1. Create Backup Copy job
2. Object to process (SRV-WIKI)
3. Select SOBR for the Backup Repository
4. Retention policy: 7 days
GFS:
2 weekly, 36 months

[screenshot]


SOBR:

[screenshot]

[screenshot]
Post by Mildur »

As I understand it, you now have:
- the backup job pointing to a simple backup repository on ExaGrid
- the backup copy job pointing to a SOBR with ExaGrid, AWS S3, and AWS S3 Glacier
That sounds OK to me.

One thing to add, in case you didn't know:
if you use Veeam Backup & Replication v12, you could also do a direct backup copy to object storage for the backup copy job.
Instead of an ExaGrid share, you would use your AWS S3 bucket as the Performance Tier; a Capacity Tier isn't required with this configuration. From the AWS S3 bucket in the Performance Tier, GFS restore points can be moved directly to your Archive Tier with the 7-day policy.

Best,
Fabian
Product Management Analyst @ Veeam Software
Post by vaxyz »

Hi Fabian,

That's interesting. I did not know that.

Taking your suggestion, I created another AWS S3 bucket, added a new SOBR, and created a backup copy job.
My goal is to keep the 2 most recent weekly full backups in the AWS S3 bucket, and 36 months of full backups in the Archive Tier. Do these screenshots look like what I'm trying to accomplish?

Also, should I check "Read the entire restore point from source backup instead of synthesizing it from increments"?

[screenshot]

[screenshot]

If I check "Make backups immutable for the entire duration of their retention policy", will that follow my GFS retention? See the first screenshot above.

There should be no minimum storage duration for AWS S3 Standard, only for the Archive Tier. Is that correct? I would like to keep my costs down.
Post by Mildur »

vaxyz wrote: My goal is to keep the 2 most recent weekly full backups in the AWS S3 bucket, and 36 months of full backups in the Archive Tier. Do these screenshots look like what I'm trying to accomplish?
Yes, that looks good. You will see 2-3 weeks of backups with this setting. But remember, only unique blocks are stored on AWS S3, so you will only use roughly the S3 capacity of 1-2 full backups with this retention setting.
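
A rough way to estimate that yourself, with assumed numbers for the full size and daily change rate:

Code:

# Back-of-the-envelope S3 usage estimate; both inputs are assumptions.
full_gb = 500          # assumed size of one full backup
daily_change = 0.03    # assumed fraction of blocks that change per day
days_retained = 14     # short-term retention in the bucket

unique_gb = full_gb * (1 + daily_change * days_retained)
print(f"~{unique_gb:.0f} GB, i.e. about {unique_gb / full_gb:.1f} full backups")
# 500 GB with 3% daily change over 14 days is ~710 GB, about 1.4 fulls.
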
vaxyz wrote: Also, should I check "Read the entire restore point from source backup instead of synthesizing it from increments"?
This option enables active full copies.
If your Performance Tier is an AWS S3 bucket, please don't enable it: every full backup would then require its entire size in object storage (AWS S3).
vaxyz wrote: If I check "Make backups immutable for the entire duration of their retention policy", will that follow my GFS retention?
Correct. Each of your monthly backups will be immutable for 36 months in the Archive Tier, as long as you pay your bills. If not, AWS can always delete your tenant and your data :)
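
For what it's worth, that immutability maps to S3 Object Lock under the hood. A hedged boto3 sketch of what a compliance-mode retention date looks like (bucket and key names are placeholders; Veeam sets this itself, so you never do it manually):

Code:

from datetime import datetime, timedelta, timezone
import boto3

# Illustration only: Veeam manages Object Lock itself. The bucket must
# have been created with Object Lock enabled; names are placeholders.
s3 = boto3.client("s3")
retain_until = datetime.now(timezone.utc) + timedelta(days=36 * 30)

s3.put_object(
    Bucket="example-veeam-archive",          # placeholder bucket name
    Key="blocks/monthly-2023-02.blk",        # placeholder object key
    Body=b"...",
    ObjectLockMode="COMPLIANCE",             # cannot be shortened or removed
    ObjectLockRetainUntilDate=retain_until,  # immutable until this date
)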
Product Management Analyst @ Veeam Software
Post by vaxyz »

Mildur wrote: Yes, that looks good. You will see 2-3 weeks of backups with this setting. But remember, only unique blocks are stored on AWS S3, so you will only use roughly the S3 capacity of 1-2 full backups with this retention setting.
For clarification, can I restore an individual file from one of the weekly full backups located in the Performance Tier (AWS S3 Standard)? Or do I need to download the entire weekly full backup just to restore one file?
Post by Mildur »

Yes, that's possible.
You can restore single guest OS files or application items from a restore point on object storage without first downloading the entire restore point.
You can even start an Instant Recovery session from object storage to your hypervisor.
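
Conceptually this works because each restore point is just a list of block objects, so a single-file restore only has to fetch the blocks that hold that file. A simplified sketch (placeholder names, not Veeam's real object layout):

Code:

import boto3

s3 = boto3.client("s3")

def restore_file(bucket: str, block_ids: list) -> bytes:
    """Reassemble one file from only the blocks that compose it."""
    parts = []
    for block_id in block_ids:
        # "blocks/" is a placeholder key scheme for this illustration.
        obj = s3.get_object(Bucket=bucket, Key=f"blocks/{block_id}")
        parts.append(obj["Body"].read())
    return b"".join(parts)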

Best,
Fabian
Product Management Analyst @ Veeam Software
Post by vaxyz »

And I should leave this unchecked, so my data will be offloaded from the Performance Tier (AWS S3) to the Archive Tier (AWS Glacier). Is that correct?

[screenshot]
Post by Mildur »

Please leave it checked.
If you uncheck it, your weekly backups will also be moved after 7 days. You don't want that; it will lead to early deletion costs on the Archive Tier.

Best,
Fabian
Product Management Analyst @ Veeam Software
Post by vaxyz »

Thank you so much!!

I think I got everything squared away - with your tremendous help.

What "Copy Mode" would you recommend for me to select?

[screenshot]
Post by Mildur »

You're welcome :)

I would use immediate copy.
With immediate copy mode, every restore point gets copied; periodic mode only copies the latest restore point when the job starts. In v12 you can switch between the two modes if one doesn't work for you and you want to try the other.

Best,
Fabian
Product Management Analyst @ Veeam Software
Post by vaxyz »

Mildur wrote: Feb 24, 2023 3:10 pm Please leave it checked. If you uncheck it, your weekly backups will also be moved after 7 days. You don't want that; it will lead to early deletion costs on the Archive Tier.
To avoid early deletion fees for my weeklies, what about increasing it to 14 days? See the screenshots below.

And I checked "Store archived backups as standalone full". Do you see any issues when restoring individual files?

[screenshot]

[screenshot]
Post by vaxyz »

Here is my revised setup.

Keep 1 weekly full in S3 Standard ("Retention policy: 7 days").

Keep 36 monthly standalone fulls in the Archive Tier.

My questions are:

1. In my backup copy job, should I select "Periodic copy (pruning)" because my bandwidth is only 250/250 Mbps?

[screenshot]

2. In my GFS settings, do I need to select "Keep weekly full backups" if I'm using 36 monthly standalone fulls? If I do need weekly fulls, how many weeks would you recommend?

3. How many days should I enter for "Retention policy"?

[screenshot]

4. If I leave "Archive backups only if the remaining retention time is above minimal storage period" unchecked, will I still be penalized with early deletion costs, even though I'm not offloading weekly fulls?

5. Should I leave "Archive GFS backups older than" at 7 days, even though I'm not offloading weekly fulls?

[screenshot]
Post by Mildur »

I'm now confused by the new changes :) What's your goal? Keeping it as cheap as possible?
Your new backup settings look like you want to move everything to the Archive Tier as soon as possible.
What happens if you lose every restore point on your ExaGrid? Getting access to the restore points in AWS Glacier Deep Archive takes time and will cost you money.

I would have kept your previous backup copy job (short term: 14 days, long-term GFS: 2 weekly, 36 monthly),
and left the option "Archive backups only if the remaining retention time is above minimal storage period" checked. It should be OK to keep 2 weeks on AWS S3 for faster restore times without additional retrieval costs; you won't save money on AWS S3 by moving backups after only 7 days.
vaxyz wrote: 1. In my backup copy job, should I select "Periodic copy (pruning)" because my bandwidth is only 250/250 Mbps?
Only you can calculate the bandwidth required to upload all incremental restore points; I can't tell you whether it's enough or not.
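
The arithmetic itself is simple, though; here is a sketch with assumed numbers that you would replace with your own:

Code:

# Upload-window check; incremental size and overhead factor are assumptions.
link_mbps = 250               # usable uplink
incremental_gb = 200          # assumed changed data per night
efficiency = 0.8              # assumed headroom for protocol/TLS overhead

seconds = incremental_gb * 8 * 1000 / (link_mbps * efficiency)
print(f"Upload window needed: {seconds / 3600:.1f} hours")
# 200 GB at an effective 200 Mb/s is roughly 2.2 hours.
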
vaxyz wrote: 2. In my GFS settings, do I need to select "Keep weekly full backups" if I'm using 36 monthly standalone fulls? If I do need weekly fulls, how many weeks would you recommend?
vaxyz wrote: 3. How many days should I enter for "Retention policy"?
Your decision. Check your business requirements and decide how long you need to retain backups.

Best,
Fabian
Product Management Analyst @ Veeam Software
Post by vaxyz »

Mildur wrote: I'm now confused by the new changes :) What's your goal? Keeping it as cheap as possible?
That was my goal.

Mildur wrote: What happens if you lose every restore point on your ExaGrid? Getting access to the restore points in AWS Glacier Deep Archive takes time and will cost you money.
That is a good point.

Mildur wrote: I would have kept your previous backup copy job (short term: 14 days, long-term GFS: 2 weekly, 36 monthly), and left the option "Archive backups only if the remaining retention time is above minimal storage period" checked. It should be OK to keep 2 weeks on AWS S3 for faster restore times without additional retrieval costs.
I will take your recommendation and apply it to my backup copy job.

Mildur wrote: Only you can calculate the bandwidth required to upload all incremental restore points; I can't tell you whether it's enough or not.
My backup copy job runs once a day, late at night, and it's the last job to run. I will leave it as "Periodic copy".


When should I expect to see my data transferred from the Capacity Tier over to the Archive Tier?