Comprehensive data protection for all workloads
vmff
Influencer
Posts: 23
Liked: 2 times
Joined: Mar 03, 2015 6:24 pm

Feature Request - Active Full Retention Options

Post by vmff »

I'm not sure of the reasoning behind retaining an active full restore point until the next active full runs, but it seems like it should be possible to not retain it if I wish. If I run an active full to actually check the source data, as a means of health checking while avoiding defragmentation and compaction tasks, I'd still like my retention settings to take effect.

Where this really becomes a problem, in my case, is when I am doing both health checks and active fulls. I health check weekly so that active fulls can run much further apart, perhaps twice a year. This obviously doesn't work, and support recommended that I either run the active fulls manually instead of scheduling them (inconvenient) or set up some sort of scripting to kick them off (also inconvenient).

I think I'd like to have this option, or at least understand why I can't. Thanks!
nielsengelen
Product Manager
Posts: 5796
Liked: 1215 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen

Re: Feature Request - Active Full Retention Options

Post by nielsengelen »

We keep the active full restore point because the incrementals that follow it depend on it. If we removed the full backup, all of those incrementals would become unusable.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
vmff
Influencer
Posts: 23
Liked: 2 times
Joined: Mar 03, 2015 6:24 pm

Re: Feature Request - Active Full Retention Options

Post by vmff »

Sure, but normally in forever forward mode, the oldest .vib is rolled back into the .vbk once the retention period has been met.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm

Re: Feature Request - Active Full Retention Options

Post by PTide »

Hi,
Sure, but normally in forever forward mode, the oldest .vib is rolled back into the .vbk once the retention period has been met.
Do you mean that you'd like the oldest .vbk to get merged in forward incremental mode, instead of being kept until another backup chain meets the retention (like in this animation)?
vmff
Influencer
Posts: 23
Liked: 2 times
Joined: Mar 03, 2015 6:24 pm

Re: Feature Request - Active Full Retention Options

Post by vmff »

This one is a bit hard to explain. I'll do my best and see if you can interpret what I mean.

A regular incremental job has a full .vbk at the beginning of the chain, with incremental .vibs out to as many restore points as are selected. When the chain reaches that point, the oldest incremental .vib is merged into the original .vbk. (I understand this is called forever forward incremental.)

Full vbk -> vib 1 -> vib 2 -> vib 3 -> vib 4 -> vib 5
[Full vbk <- vib 1] -> vib 2 -> vib 3 -> vib 4 -> vib 5 -> vib 6
etc
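
To make that concrete, here is a quick Python sketch of the merge (my own illustration, nothing from Veeam; the numbers simply mirror the diagram above):

# Toy model of forever forward retention: one full (.vbk) plus incremental
# .vibs; once the chain exceeds the requested restore points, the oldest
# .vib is merged into the .vbk.
RESTORE_POINTS = 6  # the full plus 5 increments, as in the diagram

chain = ["Full vbk"]
for run in range(1, 9):
    chain.append(f"vib {run}")
    if len(chain) > RESTORE_POINTS:
        chain.pop(1)  # fold the oldest vib into the vbk
    print(" -> ".join(chain))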
I'm trying to understand why creating an active full interrupts this process. Currently, an active full requires another active full before the .vibs start rolling up again. It looks like this:
Full vbk -> vib 1 -> vib 2 -> vib 3 -> vib 4 -> vib 5 -> ACTIVE FULL vbk (new) -> vib 1 -> vib 2 -> vib 3 -> vib 4 -> vib 5 -> vib 6 -> vib 7 -> vib 8 -> vib 9 -> vib (etc. etc. etc. until active full runs again. Retention policy is lost)

I'd suggest an option that would allow an active full not to override the retention policy, like so (with 5 restore points requested):
Full vbk -> vib 1 -> vib 2 -> vib 3 -> vib 4 -> vib 5 -> ACTIVE FULL vbk -> vib 1 -> vib 2 -> vib 3 -> vib 4 -> vib 5 (new chain meets restore point requirement)
delete original chain
[ACTIVE FULL vbk <- vib 1] -> vib 2 -> vib 3 -> vib 4 -> vib 5 -> vib 6 (just as it works normally, forever forward again, respecting retention settings)

This scenario isn't an issue if your active fulls are frequent, but stretch them out to a few times a year and you have a real problem with lower retention settings. We have certain requirements that make it hard to justify fully trusting the maintenance tasks and synthetic fulls, which makes active fulls a good fit (the original source data is read again).

Hope this makes some sense.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm

Re: Feature Request - Active Full Retention Options

Post by PTide »

Full vbk -> vib 1 -> vib 2 -> vib 3 -> vib 4 -> vib 5 -> ACTIVE FULL vbk (new) -> vib 1 -> vib 2 -> vib 3 -> vib 4 -> vib 5 -> vib 6 -> vib 7 -> vib 8 -> vib 9 -> vib (etc. etc. etc. until active full runs again. Retention policy is lost)
That looks like a bug, as the older chain should have been deleted right after the later chain met the retention. I suggest you contact the support team about that issue. Please don't forget to post your case ID here.

Thank you.
vmff
Influencer
Posts: 23
Liked: 2 times
Joined: Mar 03, 2015 6:24 pm

Re: Feature Request - Active Full Retention Options

Post by vmff »

Thanks for the response. It was actually the support department that told me to come on here and make a comment about this, as they explained to me that checking the active fulls option opens up odd retention behaviors.

They demonstrated this to me with the retention point simulator at: http://rps.dewin.me/

The behavior is reflected there as well. If you schedule any active full backup, it throws all retention settings away and just waits for the next active full to run (which is a real problem on a twice-a-year schedule, as you can see in the simulator).

If this isn't the case, and it is a bug, I'll gladly get back in touch with support to work on it, rather than getting up in the middle of the night occasionally to manually run active fulls. :)

Case #01790166
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm

Re: Feature Request - Active Full Retention Options

Post by PTide »

Either a misunderstanding took place or that's a severe bug. Could you please describe your backup job settings in detail: retention, backup mode, active full schedule, and job schedule?

Thank you.
foggy
Veeam Software
Posts: 21138
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Feature Request - Active Full Retention Options

Post by foggy »

vmff wrote:Thanks for the response. It was actually the support department that told me to come on here and make a comment about this, as they explained to me that checking the active fulls option opens up odd retention behaviors.
The confusion is probably between manually running an active full (which produces the behavior you're expecting) and enabling active fulls in the job (which switches forever forward incremental into regular forward incremental). Either way, the behavior you're describing is not expected.
vmff
Influencer
Posts: 23
Liked: 2 times
Joined: Mar 03, 2015 6:24 pm

Re: Feature Request - Active Full Retention Options

Post by vmff »

foggy wrote:The confusion is probably between manually running an active full (which produces the behavior you're expecting) and enabling active fulls in the job (which switches forever forward incremental into regular forward incremental). Either way, the behavior you're describing is not expected.
Yes, they talked about the mode change behavior, but it is unclear why it will retain multiple .vbks and potentially hundreds of .vibs (as depicted in the restore point simulator) depending on the active full frequency selected.

I'm running with the following setup:
- VM selection via vCenter container.
- Storage to a local repository, 30 restore points, incremental mode (no synthetic fulls), active fulls created monthly (last Saturday of January and July).
- Maintenance: storage-level corruption guard weekly, no defragment & compact.
- A couple of secondary targets set up.
- Backup scheduled daily (Mon-Fri).
tdewin
Veeam Software
Posts: 1818
Liked: 655 times
Joined: Mar 02, 2012 1:40 pm
Full Name: Timothy Dewin

Re: Feature Request - Active Full Retention Options

Post by tdewin »

As said, when you schedule active fulls, you disable forever incremental. That means no more merging of vbk and vib (backup files are not changed). So the only mechanism to delete anything is to wait for the next active full plus enough increments to satisfy the retention policy. This means that:
- before the active chain has reached the policy, it has to keep the previous chain
- after the active chain has reached the policy, it can delete the previous chain, but the active chain now keeps growing until the next active full, so a chain can grow bigger than the policy
-> you get at most 1 active full interval + the policy

When you run an active full manually, you don't disable forever incremental. Thus, once the retention is exceeded, the active chain starts merging. That means your active chain will never grow bigger than the retention policy:
- before the active chain has reached the policy, it has to keep the previous chain
- after the active chain has reached the policy, it can delete the previous chain, and since it also starts merging, the chain will not grow bigger than the policy
-> you get at most 2x the policy whenever you do a manual active full
This cannot be simulated with RPS, because it is a manual action.
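
If it helps, here is a quick toy simulation of the scheduled case (my own sketch in Python, not RPS logic): a new chain starts at every active full, nothing merges, and the previous chain is dropped once the new one alone satisfies the policy.

# Toy model of scheduled active fulls (illustration only, not Veeam/RPS code).
POLICY = 5     # restore points requested
INTERVAL = 14  # backup runs between scheduled active fulls

chains = [["vbk"]]
peak = 0
for run in range(1, 61):
    if run % INTERVAL == 0:
        chains.append(["vbk"])  # scheduled active full: new chain, no merging
    else:
        chains[-1].append("vib")
    peak = max(peak, sum(len(c) for c in chains))
    if len(chains) > 1 and len(chains[-1]) >= POLICY:
        del chains[0]  # newest chain meets the policy: previous chain goes
print("peak restore points on disk:", peak)

With POLICY = 5 and INTERVAL = 14 this prints a peak of 19 points, which is the "1 active full interval + the policy" maximum described above.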

What you described, where the active chain is bigger than the policy but previous chains are not deleted, should not happen. Once the active chain satisfies the policy, there is no reason to keep the previous chain. Maybe the confusion arose because the Veeam GUI shows the newest restore points at the top, while RPS shows the newest points at the bottom?
humannate
Influencer
Posts: 10
Liked: 1 time
Joined: Dec 27, 2015 2:33 am
Full Name: Nate Cartwright

Re: Feature Request - Active Full Retention Options

Post by humannate »

I think it would still be nice if you could enable active fulls, but once the retention policy kicks in, the job could switch back to forever incremental. This would help reduce bandwidth when rsyncing differentials while still preventing bit rot with active full backups. E.g., with a 14-day retention and a monthly active full run on the 1st, on the 15th of the month it would merge the full backup from the 1st with the incremental from the 2nd.

If the checkbox for "Storage level corruption guard" performs the same action, then this could be accomplished by enabling reverse incremental mode together with storage-level corruption guard.
foggy
Veeam Software
Posts: 21138
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Feature Request - Active Full Retention Options

Post by foggy »

Storage-level corruption guard calculates CRC values for the data blocks stored in the backup and compares them with the values saved previously. If it detects any corruption, it replaces the affected block with a valid copy, reading it from the source. An active full, in contrast, reads and copies all the data (the entire VM image) from production storage.
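
In rough pseudo-terms the idea looks like this (a minimal sketch assuming per-block CRCs stored alongside the backup; the block size, names, and repair flow are my own illustration, not actual Veeam code):

import zlib

BLOCK = 1024 * 1024  # hypothetical block size for the sketch

def checksums(data: bytes) -> list[int]:
    # CRC32 per fixed-size block of the backup file.
    return [zlib.crc32(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def verify_and_repair(backup: bytearray, saved: list[int], source: bytes) -> int:
    # Compare current CRCs with the previously saved ones and patch any
    # mismatched block with a valid copy read from the source.
    repaired = 0
    for n, crc in enumerate(checksums(bytes(backup))):
        if crc != saved[n]:
            start = n * BLOCK
            backup[start:start + BLOCK] = source[start:start + BLOCK]
            repaired += 1
    return repaired

An active full, by contrast, would simply re-read everything from the source with no comparison step.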
