-
- Novice
- Posts: 7
- Liked: 2 times
- Joined: Mar 03, 2015 6:26 pm
- Full Name: Steve Martens
Long Retention Best Practice
Hi All,
I'm brand new to Veeam B&R and I'm trying to figure out the best way to do some very long retention.
I have one Hyper-V VM that I want to keep for a long time. Data on this VM needs daily recovery points for 60 days, so I have configured a forward incremental backup job to do that. The retention policy is set to 60 restore points, the job runs every day, and an active full backup is taken on the last Friday of each month. If I understand this correctly, I will have a minimum of three full backups (roughly Day 1, Day 30, and Day 60) on disk, and the oldest full backup with its subsequent incrementals will be kept until the fourth full is created. After the fourth full is created, the oldest full and its roughly 30 incrementals will be removed.
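For anyone who wants to double-check that, here is a rough back-of-the-envelope sketch of the retention logic in Python. It is only a model, not anything Veeam actually runs: it assumes the active full lands every 30 days rather than on the last Friday of the month, and that the oldest chain is deleted as a whole once the newer chains on their own hold at least 60 restore points.

# Back-of-the-envelope model of forward incremental retention (not a Veeam tool).
# Assumptions: daily runs, an active full every 30 days, a 60-restore-point policy,
# and the oldest chain is dropped only when the newer chains already hold 60 points.

RETENTION = 60      # restore points to keep
FULL_EVERY = 30     # days between active fulls (stand-in for "last Friday of the month")
DAYS = 150          # length of the simulation

chains = []         # one entry per full backup chain: number of restore points in it

for day in range(1, DAYS + 1):
    if day == 1 or (day - 1) % FULL_EVERY == 0:
        chains.append(1)        # active full starts a new chain
    else:
        chains[-1] += 1         # daily increment extends the newest chain

    # drop the oldest chain once the remaining chains satisfy retention on their own
    while len(chains) > 1 and sum(chains[1:]) >= RETENTION:
        chains.pop(0)

    if day % 10 == 0:
        print(f"day {day:3}: {len(chains)} full(s) on disk, {sum(chains)} points total")

After the first couple of months this settles into two to three fulls on disk, and the oldest full plus its ~30 increments drop off right around the time the next full is due, which matches what I described above.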
That's all I really need on a daily basis, but I need to archive a full backup monthly for 10 years. What is the best way to accomplish this?
Any advice would be great!
Thanks!
Steve
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
Re: Long Retention Best Practice
Hi Steve,
Your understanding of the forward incremental method is correct.
Regarding the monthly full backups: do you have a secondary repository to fulfill the 3-2-1 backup rule? If yes, and you are using a backup copy job for it (which is the best practice), I would suggest using GFS retention to keep 120 monthly backups (12 months × 10 years).
If you don't have a secondary repository, I would suggest creating another backup job with 120 restore points that runs only monthly active fulls.
Thanks.
-
- Novice
- Posts: 7
- Liked: 2 times
- Joined: Mar 03, 2015 6:26 pm
- Full Name: Steve Martens
Re: Long Retention Best Practice
Thanks Shestakov,
My second repository is not set up yet, but it will be soon. I have modified the original job to take active full backups once per week. I have set up a backup copy job (temporarily using the same repository) to keep 66 restore points. The backup copy job keeps 4 weekly, 12 monthly, 40 quarterly, and 10 annual restore points spanning 10 years (66 restore points in total). When my second repository is set up (via Cloud Connect), I will edit the backup copy job to point to that repository.
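As a quick sanity check of those numbers (just arithmetic, not a Veeam sizing tool, and the per-full size below is a made-up placeholder), my understanding is that each GFS point in the copy job ends up as its own full backup file:

# Quick arithmetic check of the GFS schedule above (nothing Veeam-specific).
gfs = {"weekly": 4, "monthly": 12, "quarterly": 40, "yearly": 10}
total_points = sum(gfs.values())
print(f"GFS restore points: {total_points}")        # 4 + 12 + 40 + 10 = 66

# Ignoring compression and dedupe, and assuming every GFS point is a separate full:
FULL_SIZE_GB = 100   # hypothetical size of one full backup of this VM
print(f"rough archive footprint: {total_points * FULL_SIZE_GB} GB")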
I think that should do what I need.
Thank you!
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
Re: Long Retention Best Practice
Just a remark: if you enable GFS retention and set the numbers of weekly/monthly/quarterly/yearly restore points, there is no need to also set the number of restore points to the sum of those periodic ones. Doing that, you would end up with 66*2 = 132 restore points. "Restore points to keep" defines the number of regular restore points in the single (non-GFS) chain, separate from the GFS points.
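In other words (numbers only, not how the product counts internally, and the smaller chain value is just a placeholder):

# GFS points are kept on top of the regular chain governed by "Restore points to keep".
gfs_points = 4 + 12 + 40 + 10        # 66 periodic (GFS) restore points
chain_as_configured = 66             # the value set above
chain_small = 7                      # hypothetical small value for the rolling chain

print("66 chain points + 66 GFS points =", chain_as_configured + gfs_points)   # 132
print(" 7 chain points + 66 GFS points =", chain_small + gfs_points)           # 73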
Thanks!
-
- Novice
- Posts: 7
- Liked: 2 times
- Joined: Mar 03, 2015 6:26 pm
- Full Name: Steve Martens
Re: Long Retention Best Practice
Great note, thanks! Would you then recommend setting the number of restore points to a minimum? As this is a backup copy job, I'm assuming it will not affect the number of restore points set in the original backup job?
Steve
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
Re: Long Retention Best Practice
You are correct, it will not affect the source backup job in any way. Here is a good description of the best practices; please review it.
-
- Product Manager
- Posts: 20450
- Liked: 2318 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
Re: Long Retention Best Practice
Steve Martens wrote: I have set up a backup copy job (temporarily using the same repository) to keep 66 restore points.
Is it the very same repository, or are those jobs pointed to at least two different folders on the same physical server? I'm wondering because there is a built-in mechanism preventing a backup copy job from looking for restore points in the repository that is specified as the backup copy job target. In that case, the backup copy job will sit in the idle state forever.
Thanks.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
Re: Long Retention Best Practice
And once you have the secondary repository, before switching the target of the job you can manually move all of the created backups there (seeding) and map the job to them.
Thanks.
-
- Novice
- Posts: 7
- Liked: 2 times
- Joined: Mar 03, 2015 6:26 pm
- Full Name: Steve Martens
Re: Long Retention Best Practice
v.Eremin wrote: Is it the very same repository, or are those jobs pointed to at least two different folders on the same physical server? I'm wondering because there is a built-in mechanism preventing a backup copy job from looking for restore points in the repository that is specified as the backup copy job target. In that case, the backup copy job will sit in the idle state forever.
Actually, I have not enabled the backup copy jobs yet. I built them pointing to the exact same (only) repository I have, but they are not running yet. When our second repository (off-site) is available, I'll edit those jobs so they write the copy to a portable device, which will then be physically moved to the off-site repository and seeded there.
Thanks.
-
- Product Manager
- Posts: 20450
- Liked: 2318 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
Re: Long Retention Best Practice
Got it; I was just trying to point out a source of potential confusion, as some of our customers who use the same repository for both backup and backup copy jobs don't understand why the latter sits forever in the "waiting" state. Thanks.