Discussions related to using object storage as a backup target.
jcofin13
Service Provider

Cloud Capacity Tier in SOBR

Post by jcofin13 »

I want to see if I have this Performance/Capacity Tier stuff straight.

Say I have an AWS capacity tier set up, with both "copy immediately" enabled and my operational restore window for moving to the capacity tier set to 32 days.

Job settings:
14 daily - full + incs with synth full every Saturday (14 day immutable)
4 weekly
12 monthly
3 yearly

This means I will always have 14 dailies, 3-4 weeklies, and 1 monthly onsite locally to pull backups from, and all blocks for older backups will be picked up and put in AWS storage. Is this the correct way to think about it, given my operational restore window (move setting) is set to 32?

The backup items getting moved to AWS is transparent, but in a restore scenario, if I have to restore from 2 months ago, that likely means all the data for that restore is coming from AWS and is likely slower than local (and with cost). That's fine. My goal is to free up space on our on-prem repos and stop keeping so much data locally, especially old data that we probably won't need anyway.

Second question:
If I set this number too high to start, say 60 days, and then we decide we want it at 32 days, is Veeam smart enough to automatically move those old blocks back locally to the performance tier, or is that an import job of some sort?
Mildur
Product Manager
Full Name: Fabian K.

Re: Cloud Capacity Tier in SOBR

Post by Mildur »

This means I will always have 14 dailies, 3-4 weeklies, and 1 monthly onsite locally to pull backups from, and all blocks for older backups will be picked up and put in AWS storage. Is this the correct way to think about it, given my operational restore window (move setting) is set to 32?
Correct, but because you have enabled copy and move together, the older blocks are already copied to the capacity tier; restore points older than 32 days only need to be deleted from the performance tier. If you set 32 days, you will get between 32 and 35 days of restore points on the performance tier, because with forward incremental retention you always have some additional restore points.
The same goes for the 14 daily backups on the performance tier: you will get between 14 and 21 days of backups.
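
A rough way to picture that rule (a toy model only, not Veeam code; the dates and the Saturday synthetic full are assumptions taken from this thread): a chain can only leave the performance tier once it is sealed by the next synthetic full and every point in it has aged past the operational restore window, which is why a few extra days always remain locally.

Code:

from datetime import date, timedelta

MOVE_WINDOW_DAYS = 32        # operational restore window on the SOBR
SYNTHETIC_FULL_WEEKDAY = 5   # Saturday (Monday = 0)

today = date(2022, 3, 5)                                       # arbitrary "now"
points = sorted(today - timedelta(days=d) for d in range(60))  # 60 daily points

# Group the daily points into chains: a new chain starts at every synthetic full.
chains, current = [], []
for p in points:
    if p.weekday() == SYNTHETIC_FULL_WEEKDAY and current:
        chains.append(current)
        current = []
    current.append(p)
chains.append(current)

cutoff = today - timedelta(days=MOVE_WINDOW_DAYS)
for i, chain in enumerate(chains):
    sealed = i < len(chains) - 1              # sealed once the next full exists
    movable = sealed and max(chain) < cutoff  # every point must be outside the window
    print(f"{chain[0]} .. {chain[-1]}  movable from performance tier: {movable}")
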
The backup items getting moved to AWS is transparent, but in a restore scenario, if I have to restore from 2 months ago, that likely means all the data for that restore is coming from AWS and is likely slower than local (and with cost). That's fine.
Veeam is smart. If you restore from the capacity tier, it only takes the blocks that are not already present on the performance tier.
If you want to restore a single VM, chances are good that most of that VM's blocks are also still stored on the performance tier.
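
As a sketch of that idea (illustrative block IDs only; this is not how Veeam actually indexes blocks), the restore only has to fetch whatever is missing locally:

Code:

# Toy model of a capacity-tier restore: only blocks missing locally are downloaded.
restore_point_blocks = {"b01", "b02", "b03", "b04", "b05"}   # blocks the restore needs
performance_tier_blocks = {"b01", "b02", "b04"}              # blocks still held on-prem

read_locally = restore_point_blocks & performance_tier_blocks
download_from_cloud = restore_point_blocks - performance_tier_blocks

print("read from performance tier:", sorted(read_locally))
print("download from capacity tier:", sorted(download_from_cloud))
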

If I set this number too high to start, say 60 days, and then we decide we want it at 32 days, is Veeam smart enough to automatically move those old blocks back locally to the performance tier, or is that an import job of some sort?
Veeam will not move them back automatically, but you can download them back to the performance tier yourself.
HannesK
Product Manager

Re: Cloud Capacity Tier in SOBR

Post by HannesK »

Just on the last one: if you decrease the "move" days, more data ends up stored only in AWS, so nothing needs to be moved back locally. It's the other way round: the blocks no longer needed on-prem are deleted there. For the opposite case (increasing the "move" days), Fabian gave the answer.
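
To make the direction of that explicit, here is a small illustration (made-up dates, chain boundaries ignored for simplicity): shrinking the window only increases the number of points that exist solely in the capacity tier, so nothing has to travel back on-prem.

Code:

from datetime import date, timedelta

today = date(2022, 3, 5)
points = [today - timedelta(days=d) for d in range(90)]   # 90 daily restore points

def cloud_only(window_days):
    """Points older than the move window live only in the capacity tier."""
    cutoff = today - timedelta(days=window_days)
    return [p for p in points if p < cutoff]

print(len(cloud_only(60)), "points are cloud-only with a 60-day window")
print(len(cloud_only(32)), "points are cloud-only with a 32-day window")
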
jcofin13
Service Provider

Re: Cloud Capacity Tier in SOBR

Post by jcofin13 »

That all makes sense, but to be clear, is it the same with GFS data as well?
In my above example:
14 daily - full + incs with synth full every Saturday (14 day immutable)
4 weekly -> set under "Keep certain full backups longer for archival purposes" (GFS settings)
12 monthly -> set under "Keep certain full backups longer for archival purposes" (GFS settings)
3 yearly -> set under "Keep certain full backups longer for archival purposes" (GFS settings)

Set move to 32 days

If I set both the copy AND move settings, it seems the answer is yes: it won't re-copy or move anything, since it has already been copied, and it will delete anything locally that belongs to a full backup chain older than 32 days, freeing up all that space.

I just got off the phone with support, and the tech I talked to thought it worked like this:
None of the 4 weeklies would be removed locally until the entire 4 weeks has passed, because setting GFS to 4 weeks makes it its own active chain until the 4 weeks are up, and it can't be moved until then.

Same with the monthlies: in order to free up the monthlies, all 12 months would need to complete locally before they could be "moved" to the capacity tier, because they form their own active chain for 12 months and "move" can only move inactive chains of data.
And so on.

Is this correct? That seems odd. I run synthetic fulls each week, so I would think each backup would be eligible to be "moved" off local storage after 32 days. That's maybe a bad example, since there are 4 weeks in a month :). But the monthlies: does it really have to wait the full 12 months before it can move that entire year's worth off to AWS?

My situation is that I want to free up local repo storage, as I am quickly running out. I was figuring that if I just enabled "move" along with "copy as soon as", it would move all inactive chains off local storage, meaning anything that has had a synthetic full (or active full) and is outside the operational restore window.
Mildur
Product Manager

Re: Cloud Capacity Tier in SOBR

Post by Mildur »

What the tech is telling you is not correct.

A GFS restore point is always sealed after 7 days when using weekly synthetic fulls.
The monthly GFS restore points are not one VBK with a VIB for each month; all of them are VBKs (full backups), so they are definitely sealed.

You will have 4 weekly GFS points on the performance tier, that's correct, because the last 4 weeklies fall within the operational window of 32 days.

For the monthlies, you will always have 1 monthly on the performance tier, because the 32-day operational window will always contain the first weekend of a month.

If the scheduled day of a weekly and a monthly overlap, you will have only 1 VBK on that day, for example on Jan 1st 2022. That GFS restore point would be tagged with all three flags: weekly, monthly, and yearly (depending on your configuration).
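
A quick way to see why the last few weeklies and at least one monthly always land inside a 32-day window, and why a single VBK can carry several GFS flags (toy model only; Veeam's actual GFS scheduling is configurable, and the "first Saturday" rule here is just an assumption for illustration):

Code:

from datetime import date, timedelta

today = date(2022, 3, 19)
window_start = today - timedelta(days=32)

# Tag every Saturday full: the first Saturday of a month is also the monthly,
# and the first Saturday of the year is also the yearly (illustrative rule only).
fulls = []
d = date(2022, 1, 1)
while d <= today:
    if d.weekday() == 5:                       # Saturday
        tags = ["weekly"]
        if d.day <= 7:
            tags.append("monthly")
            if d.month == 1:
                tags.append("yearly")
        fulls.append((d, tags))
    d += timedelta(days=1)

print("GFS fulls inside the 32-day operational window:")
for d, tags in fulls:
    if d >= window_start:
        print(" ", d, "->", "/".join(tags))
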
jcofin13
Service Provider

Re: Cloud Capacity Tier in SOBR

Post by jcofin13 »

Thank you for the clarification. This makes sense and is how I thought it worked.
With that in mind, setting things to 32 days would free up local storage (performance tier), as anything older is then only stored in the capacity tier.

Let's go one more layer. Same scenario, but now we have a local immutable repo with an immutability setting of 14 days. The capacity tier in AWS is set for longer immutability, but I'm not sure whether that matters at all for the performance tier's ability to remove older items that age out of the operational restore window.

If immutability is set to 14 days and the job keeps 14 days, it should be fine, as anything past that should fall off retention. I suppose I could set retention to 15 on the job just to be sure it is able to delete the oldest point and not hit the immutable flag.
The real question: if GFS is enabled on a job and immutability is enabled on the local repo, the immutability setting says "GFS full backups are made immutable for the entire duration of the retention policy".

I assume the retention policy here means your GFS settings.
Thus, if you have your retention policy set as stated above for the GFS settings on your job, and you have it set to keep 12 monthlies, it then has to keep all 12 of those monthlies locally on the performance tier, since that is what GFS is set to. Basically it ignores your immutability "days" setting for the GFS items and sets immutability on GFS fulls to 12 months??

Thus, enabling the move option to free up local space may not actually be able to free up any space at all, since immutability on the GFS fulls will prevent it.

Is that correct in that situation? Perhaps I am overthinking this, but the setting does say "GFS full backups are made immutable for the entire duration of their retention policy".
Mildur
Product Manager

Re: Cloud Capacity Tier in SOBR

Post by Mildur »

Thus, enabling the move option to free up local space may not actually be able to free up any space at all, since immutability on the GFS fulls will prevent it.

Is that correct in that situation? Perhaps I am overthinking this, but the setting does say "GFS full backups are made immutable for the entire duration of their retention policy".
If the move policy is enabled, Veeam makes these GFS restore points immutable only for the period specified in the settings of the hardened repo: 32 days in your case, if I remember your first post correctly. Your GFS restore points should already be protected for only 32 days, because you have had the move policy enabled the entire time.


For the second scenario:
If you hadn't enabled the move policy, only new GFS restore points would use the period specified on the hardened repo as their immutability period; otherwise it would be a nice security hole for attackers.
If you already have a Linux hardened repo with GFS restore points created without a capacity tier and move policy, then after enabling the move policy, Veeam cannot adjust the existing immutability date to an earlier date. The Linux filesystem would prevent that.

But if you are using a hardened repo with Fast Clone support, the space savings from offloading GFS restore points to the S3 storage should not be that high, because normally most of the blocks of a moved GFS restore point are still referenced by other GFS restore points and cannot be deleted. How many blocks are shared between the GFS restore points depends on your change rate.

From the guide:
[For capacity tier with enabled move policy] Veeam Backup & Replication ignores the GFS retention policy. The immutability time period for full backup files equals the period specified in the setting of a hardened repository.
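
As a toy model of the two rules above (not Veeam's implementation; the day counts are simply the values discussed in this thread): with the move policy enabled, the lock equals the hardened-repo period rather than the GFS retention, and an immutability date already written to the filesystem can only be extended, never shortened.

Code:

from datetime import date, timedelta

REPO_IMMUTABILITY_DAYS = 32    # hardened repo setting (the operational window here)
GFS_RETENTION_DAYS = 365       # roughly 12 monthly GFS points

def immutable_until(created, move_policy, already_locked_until=None):
    """Toy model of how long a GFS full stays locked on a hardened repo."""
    if move_policy:
        proposed = created + timedelta(days=REPO_IMMUTABILITY_DAYS)
    else:
        # Without a move policy the GFS full is locked for its whole retention.
        proposed = created + timedelta(days=GFS_RETENTION_DAYS)
    # The filesystem only ever extends an immutability date, never shortens it.
    if already_locked_until and already_locked_until > proposed:
        return already_locked_until
    return proposed

created = date(2022, 1, 1)
old_lock = immutable_until(created, move_policy=False)
print("GFS full created without move policy, locked until:", old_lock)
print("after enabling move policy later, still locked until:",
      immutable_until(created, move_policy=True, already_locked_until=old_lock))
print("new GFS full created with move policy, locked until:",
      immutable_until(date(2022, 2, 5), move_policy=True))
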
jcofin13
Service Provider

Re: Cloud Capacity Tier in SOBR

Post by jcofin13 »

Thank you for the reply. I enabled the move option on our jobs this past weekend. It did free up some space, but I'm a little confused now. I set it to 32 days, expecting that after the synthetic fulls kicked off this weekend, the offload would be enforced for all backups that fall outside 32 days.

I can't seem to work out how to see what is now on the capacity tier only versus what is on the performance tier. It seems like a dumb question, but if I go to restore any VM via the "Files" area, under either "disk" or "object" storage, all the restore points show data on both the performance and capacity tier, no matter whether they are 1 day old or 70 days old. Is there a way to see what is located only on the capacity tier for restore versus what is stored locally on the performance tier?
Mildur
Product Manager

Re: Cloud Capacity Tier in SOBR

Post by Mildur »

Yes, you can see it from the icons in the backup properties:

Backup State Indicators
jcofin13
Service Provider

Re: Cloud Capacity Tier in SOBR

Post by jcofin13 »

Thank you. Those state indicators help! Oddly, if I look at a particular VM, it shows full VBKs for older weeklies and monthlies with the icon "Full restore point; on performance tier and offloaded to capacity tier".

I would think that if my operational restore window on the SOBR is 32 days, those should be on the capacity tier only. Not sure why it is keeping them local on the performance tier.
Mildur
Product Manager

Re: Cloud Capacity Tier in SOBR

Post by Mildur »

You have only just enabled the move policy.

So if those restore points were created on a Linux hardened repo (performance tier) before the move policy was enabled, this is expected behavior. They are protected for their entire retention period and cannot be removed from the performance tier yet.

I think I understand now what Veeam support was telling you. Enabling the move policy on an existing hardened repo to offload GFS restore points after the specified operational window will only work for GFS restore points created after enabling the move policy.

Each of the GFS restore points created before enabling the move policy will stay protected for another 12 months (the configured GFS retention period).
jcofin13
Service Provider

Re: Cloud Capacity Tier in SOBR

Post by jcofin13 »

Confusing. So then, if I have GFS on my jobs set to:

4 weekly -> set under "Keep certain full backups longer for archival purposes" (GFS settings)
12 monthly -> set under "Keep certain full backups longer for archival purposes" (GFS settings)
3 yearly -> set under "Keep certain full backups longer for archival purposes" (GFS settings)

Do I need to wait the full 4 weeks / 12 months / 3 years before it starts offloading to AWS, because we enabled MOVE after some of the GFS points were already on the performance tier? Basically, if you don't enable move before any of the weekly/monthly/yearly fulls exist, it won't do anything to free up space locally, and you have no real option to offload the local performance tier to the capacity tier to free up space.
Mildur
Product Manager

Re: Cloud Capacity Tier in SOBR

Post by Mildur »

It offloads immediately if you use the move policy and copy policy combined. Offloading doesn't mean anything is removed; offloading means copying or moving the backup files to the capacity tier.
Only the move policy will remove restore points from the performance tier.

The issue here is that you did not enable the move policy from the start. You are now trying to use the move policy to remove GFS restore points from the hardened repo that are still immutable. That is simply not possible:
Veeam cannot change an immutability date to an earlier date than the one already stored in the Linux filesystem.
They will be deleted once they have aged out.
Do I need to wait the full 4 weeks / 12 months / 3 years before it starts offloading to AWS, because we enabled MOVE after some of the GFS points were already on the performance tier?
No, of course not. They are already offloaded (copied; you told us in your first post that you already have copy enabled).

For the removal of the old GFS restore points from the performance tier:
Each month, one of the monthlies will fall out of its configured retention period and will be deleted by Veeam. Not all monthly GFS restore points have the SAME immutability date, so you don't need to wait an entire year before "every monthly is deleted together".
The same goes for the yearly restore points: each year one gets deleted.
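
In other words, the pre-existing GFS points simply age out one at a time. A minimal illustration (made-up creation dates, assuming a 12-month lock on each existing monthly):

Code:

from datetime import date, timedelta

# 12 existing monthly GFS fulls, each locked for ~12 months from its creation date.
monthlies = [date(2021, month, 2) for month in range(1, 13)]

for created in monthlies:
    deletable_after = created + timedelta(days=365)
    print(f"monthly from {created} can leave the performance tier after {deletable_after}")
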