jmc
Service Provider
Posts: 91
Liked: 8 times
Joined: Sep 12, 2011 11:49 am
Full Name: jmc
Location: Duisburg - Germany
Contact:

BC: only the old restore points

Post by jmc »

Hello all,

I have a question about the following setup:

A B&R 11a installation with a primary repository holds 365 days of backups for a VM; the job retention is set to 1 year. Among them are 52 full backups. I now want to move full backups out to an archive repository, but only those that are older than 365 days, i.e. the full backups that would fall out of rotation now.

In summary, Veeam should keep the current 365 days on the primary storage and the older restore points on the archive.

With a normal backup copy job, all data is copied to the archive storage: the restore points older than 1 year AND the newer ones that I don't want there.

What can I do without having to use a PowerShell script? I am not a PS programmer.

Thanks
Jeff
Everybody asks why the dinosaurs are gone - nobody asks why they lived so long
PetrM
Veeam Software
Posts: 3264
Liked: 528 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: BC: only the old restore points

Post by PetrM »

Hi Jeff,

Would you be so kind as to clarify what storage you use as an archive repository? If it's an object storage, you can add it as an extent to a Scale-Out Backup Repository and offload data older than the specified number of days to the Capacity Tier using the Move policy.

If it's a standard on-premises repository, then backup copy with GFS might be a better approach: you can store the desired number of yearly backups and respect the 3-2-1 rule at the same time.

Thanks!
jmc
Service Provider
Posts: 91
Liked: 8 times
Joined: Sep 12, 2011 11:49 am
Full Name: jmc
Location: Duisburg - Germany
Contact:

Re: BC: only the old restore points

Post by jmc »

Hello PetrM,

Thanks for the answer. It's the second variant: a local storage. However, GFS doesn't help me, because here I can say, e.g., that I want to keep 52 weekly backups, but those 52 are always the ones coming from the main backup. If the main job also keeps 52 backups, then the same restore points end up on both storages. The problem is that in this particular case the main storage holds 52 weekly backups, about 150 TB of data. If I copy these to an archive storage via backup copy with GFS, I have already used up another 150 TB. The system is supposed to back up continuously, and the archive should only keep the restore points that are NOT on the main storage anymore, i.e. the older ones from week 53 onward.

Btw:
I only need 160 weeks of full backups; no monthly, quarterly, or yearly backups.

Thanks
Jeff
Everybody asks why the dinosaurs are gone - nobody asks why they lived so long
PetrM
Veeam Software
Posts: 3264
Liked: 528 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: BC: only the old restore points

Post by PetrM »

Hi Jeff,

The backup copy job has its own retention policy which works independently of the primary backup job retention. You can configure short-term and long-term policies for backup copy depending on the amount of space on the secondary storage. Just keep in mind that the main purpose of the 3-2-1 rule is to ensure data redundancy by having copies of the same data on different media; it's better to stick with this strategy.

Thanks!
jmc
Service Provider
Posts: 91
Liked: 8 times
Joined: Sep 12, 2011 11:49 am
Full Name: jmc
Location: Duisburg - Germany
Contact:

Re: BC: only the old restore points

Post by jmc »

Hello PetrM,

You are right, but when I enable a backup copy for a job, the backup copy copies the new data from the source to the target. I can tell the backup copy job to keep x weeks, y months ... BUT I cannot tell it: hey, only start copying restore points after x weeks of source retention. When I start a backup copy job, it begins copying new restore points from the START DATE of the job, following the GFS rules.

When there are already 10 restore points from the past and I start the backup copy job NOW, the job only copies the latest and the upcoming new restore points.

That is exactly what I don't want. I want only the restore points that would be removed from the source to be copied, before they are deleted.

A PS script should do something like this:
- Check how many backups are on the source.
- Get the date of the oldest full backup restore point on the source.
- Check whether that date is close to 365 days old.
- If it is younger than that, do nothing.
- If it is close to the year (might be up to 10 days), copy it to the archive.

In this case I have 1 year of restore points on the source and all older ones on the target.
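Just to illustrate the selection step of such a script: here is a minimal Python sketch of the date logic only. This is not the Veeam API; the actual copy would still need the Veeam PowerShell cmdlets, and the function and parameter names here are made up for the example.

```python
from datetime import date, timedelta

def points_to_archive(full_backup_dates, today, retention_days=365, grace_days=10):
    """Return the full-backup dates that are about to fall out of the
    primary retention window and should be copied to the archive first.

    A point qualifies when its age is within `grace_days` of the
    `retention_days` limit, or already beyond it.
    """
    # Anything taken on or before this date is close enough to the
    # retention limit to be archived before the source job deletes it.
    threshold = today - timedelta(days=retention_days - grace_days)
    return sorted(d for d in full_backup_dates if d <= threshold)
```

With retention_days=365 and grace_days=10, a full backup taken 356 days ago already qualifies, while one taken 300 days ago is left on the primary storage alone.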

Thanks
Jeff
Everybody asks why the dinosaurs are gone - nobody asks why they lived so long
PetrM
Veeam Software
Posts: 3264
Liked: 528 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: BC: only the old restore points

Post by PetrM »

Hi Jeff,

The use case is clear, and it's covered by the Move policy, which offloads data to the Capacity Tier. The backup copy job copies the current state of the data and also provides a flexible GFS policy.

From my point of view, backup copy with GFS might be a solid way, but I agree that it does not fit your requirement. Let's note it as a feature request for future versions; however, we don't have enough similar requests so far to prioritize it. Basically, the algorithm of the script seems correct, but I didn't test it and cannot comment on the reliability of such an approach.

Thanks!