-
- Influencer
- Posts: 12
- Liked: never
- Joined: Feb 03, 2015 6:53 am
- Full Name: Daniel Rogers
- Contact:
Backup lifecycle advice
Hello.
I was hoping someone could offer some insight/advice. I would like to set the following up:
Daily (INCR) - 8 day retention
Weekly (FULL) - 5 week retention
Monthly (FULL) - 13 months retention
Yearly (FULL) - 2 year retention
I would like the file naming format to reflect the type of backup, so I believe I need 4 jobs each named accordingly.
The reason for the naming is that we wish to use AWS S3 lifecycle policies through the console, and they are quite simplistic (a sketch of such prefix-based rules follows this post).
I have looked over the GFS in Veeam 8 and it looks like it has promise but I don't think I quite understand how it could be configured to do what I need. I could create 4 different jobs but I would like to avoid the job stacking where the weekly + monthly run either at the same time or in close enough proximity for one to be moot. A rolling weekly full with the ability to somehow rename and tag one as a monthly/yearly would be ideal.
Hopefully that's clear, any advice would be really appreciated!
Kind regards,
Daniel
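For reference, prefix-based S3 lifecycle rules matching the scheme above might look roughly like the following boto3 sketch. The bucket name, the exact prefixes, and the day counts are illustrative assumptions only, not something confirmed in this thread:

    # Sketch: one prefix-based lifecycle rule per backup tier, assuming the
    # daily-/weekly-/monthly-/yearly- naming described above. Day counts
    # approximate the stated retentions and are illustrative only.
    import boto3

    s3 = boto3.client("s3")

    rules = [
        # Daily incrementals: expire after 8 days.
        {"ID": "daily", "Filter": {"Prefix": "daily-"}, "Status": "Enabled",
         "Expiration": {"Days": 8}},
        # Weekly fulls: expire after roughly 5 weeks.
        {"ID": "weekly", "Filter": {"Prefix": "weekly-"}, "Status": "Enabled",
         "Expiration": {"Days": 35}},
        # Monthly fulls: move to Glacier after 30 days, expire after ~13 months.
        {"ID": "monthly", "Filter": {"Prefix": "monthly-"}, "Status": "Enabled",
         "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
         "Expiration": {"Days": 400}},
        # Yearly fulls: move to Glacier after 30 days, expire after 2 years.
        {"ID": "yearly", "Filter": {"Prefix": "yearly-"}, "Status": "Enabled",
         "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
         "Expiration": {"Days": 730}},
    ]

    # "example-backup-bucket" is a hypothetical bucket name.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backup-bucket",
        LifecycleConfiguration={"Rules": rules},
    )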
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Backup lifecycle advice
Hello Daniel and welcome to the forum!
A backup copy job with GFS retention policy is exactly what you need to avoid backup file duplication.
You just need to create a backup job and a backup copy job with the corresponding retention settings. Please get familiar with the provided links and ask additional questions if you have any. Thanks.
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Feb 03, 2015 6:53 am
- Full Name: Daniel Rogers
- Contact:
Re: Backup lifecycle advice
Hi Shestakov,
Thanks for your reply and the welcome.
I have read over the documentation again and thought this may be close to what I am after. However, I still can't think of a reliable way to name the backup files appropriately, with prefixes of daily-, weekly-, monthly- and yearly-.
Backup Job
- 8 Restore points
- No active fulls (these aren't required in this config?)
- Runs Daily 10pm Mon - Friday
- Configured with secondary target (Backup Job Copy)
Backup Job Copy
- Copy every 1 day @ 8:00AM
- Keep 8 restore points (should this number match up to the restore points specified in Backup Job?)
- Archival settings
- Weekly : 5 : Saturday 10PM
- Monthly : 13 : 1st Saturday of month
- Yearly : 2 : First Sunday of year
Many thanks for your insight.
Daniel
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Backup lifecycle advice
Daniel,
Glad you have a better understanding now. Your plan looks sane.
I just have some comments.
daniel_rogers wrote: Backup Job... No active fulls (these aren't required in this config?)
They are not required by the config, and here is a post explaining why. However, if it's required by your company's policy, you can schedule it.
daniel_rogers wrote: Backup Job Copy... Keep 8 restore points (should this number match up to the restore points specified in Backup Job?)
It's not obligatory, up to you.
Thanks.
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Feb 03, 2015 6:53 am
- Full Name: Daniel Rogers
- Contact:
Re: Backup lifecycle advice
Thank you again for your reply.
Would it be possible to set up this configuration:
Backup Job: Daily_
Backup Copy Jobs (3 separate jobs): Weekly_, Monthly_, Yearly_
Each backup copy job has the respective time frames of (5 weeks, 13 months, 2 years)
This way the files would be created with the appropriate names. Thanks for your assistance
Daniel
Would it be possible to set up this configuration:
Backup Job: Daily_
Backup Copy Jobs (3 separate jobs): Weekly_, Monthly_, Yearly_
Each backup copy job has the respective time frames of (5 weeks, 13 months, 2 years)
This way the files would be created with the appropriate names. Thanks for your assistance
Daniel
-
- Product Manager
- Posts: 20450
- Liked: 2318 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Backup lifecycle advice
100 days is the longest sync interval you can specify for a backup copy job, so the described approach wouldn't work, at least for the yearly job.
Anyway, is the naming such a big deal, when you can easily go to the GUI and see the corresponding label next to a GFS restore point ("W" - weekly, "Y" - yearly, etc.)?
Thanks.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Backup lifecycle advice
Daniel,
Technically, you can schedule a second backup job to run annually, but indeed, the most elegant and least resource-consuming solution is one backup job and one backup copy job, as mentioned above.
Is naming the only obstacle? As Vladimir wrote, using GFS you will have "W", "M", "Y" labels.
Thanks.
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Feb 03, 2015 6:53 am
- Full Name: Daniel Rogers
- Contact:
Re: Backup lifecycle advice
Hello,
Thank you both for your replies, I really appreciate the assistance.
In an effort to understand how this works I have upgraded to Veeam 8 and set up the new policies. I understand what you mean about having just one backup copy job and have removed the others. I will find another way around the renaming. (incidentally the names are important for the AWS lifecycle policies - I will need a reliable way to determine what is a Daily/Weekly/Monthly/Yearly in order to apply policies that move the files to Glacier storage after x time).
I have found that the backup copy job needs to target a different repository, otherwise it will not find new restore points! This was important.
I have just simulated a 'sync' and I can see it creating a new .vbk in the backup copy job repository. I think I understand how it works now, please correct me if I am wrong:
The backup job is configured as Incremental, without synthetic fulls or active fulls. This runs at 10pm each weekday. At the time specified within the backup copy job (whether weekly/monthly, etc.) it will scan the repo that contains the incrementals and create a new .vbk in a separate repo.
Questions:
This .vbk file that is created by the backup copy job - is this akin to an active full backup?
Is the only difference between this method and an active full method that it is getting the data from another repository instead of the source?
If I copied only this .vbk file, would that be enough to perform a complete restore from?
If so, then sending the weekly/monthly/yearly .vbk files created by the backup copy job to AWS etc should be sufficient?
In the backup job, do I need to specify the backup copy job as a secondary target in order for it to work, or will it do its thing anyway as long as it's created and enabled?
Thank you again!
Daniel
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Feb 03, 2015 6:53 am
- Full Name: Daniel Rogers
- Contact:
Re: Backup lifecycle advice
Sorry and one more question..
If I have GFS policy configured for weekly/monthly/yearly, won't these be doubling up occasionally?
e.g. the yearly running on a weekend will possibly run alongside the monthly and the weekly. Apart from the I/O, I don't think this is a particular issue, but it would probably take longer than the weekend to complete three such backups.
Thank you,
Daniel
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Backup lifecycle advice
Daniel,
daniel_rogers wrote: The backup job is configured as Incremental, without synthetic fulls or active fulls. This runs at 10pm each weekday. At the time specified within the backup copy job (whether weekly/monthly, etc.) it will scan the repo that contains the incrementals and create a new .vbk in a separate repo.
This .vbk file that is created by the backup copy job - is this akin to an active full backup?
Copied backup files have the same format as those created by backup jobs, and you can use any data recovery option for them.
Note that the first synchronization interval of the backup copy job always produces a full backup file. If the backup chain on the source backup repository was created using the forward incremental backup method, Veeam Backup & Replication copies data blocks from the first full backup and a set of incremental backups to form a full backup of a VM as of the most recent state.
daniel_rogers wrote: Is the only difference between this method and an active full method that it is getting the data from another repository instead of the source?
Again, the backup copy job just copies existing backups to another repository.
daniel_rogers wrote: If I copied only this .vbk file, would that be enough to perform a complete restore from?
Correct. The backup copy job creates a full backup on the first run, but can also copy incremental backups. You can restore from both full (.vbk) and incremental (.vib) backup files.
daniel_rogers wrote: If so, then sending the weekly/monthly/yearly .vbk files created by the backup copy job to AWS etc. should be sufficient?
Correct.
daniel_rogers wrote: In the backup job, do I need to specify the backup copy job as a secondary target in order for it to work, or will it do its thing anyway as long as it's created and enabled?
No, it's not obligatory. When creating the backup copy job, you can just choose the backup job whose restore points you want to copy.
daniel_rogers wrote: If I have GFS policy configured for weekly/monthly/yearly, won't these be doubling up occasionally?
The files will not be duplicated. That's one of the main advantages of GFS retention.
Thanks.
-
- Veeam Software
- Posts: 21144
- Liked: 2143 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Backup lifecycle advice
daniel_rogers wrote: I will find another way around the renaming. (incidentally the names are important for the AWS lifecycle policies - I will need a reliable way to determine what is a Daily/Weekly/Monthly/Yearly in order to apply policies that move the files to Glacier storage after x time).
Each GFS restore point has a corresponding postfix in its file name (<filename>_WMQY.vbk); you can use this for determination.
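If the naming has to feed an upload or lifecycle script, a small classifier keyed on that suffix could look like the sketch below. It assumes file names end in _W/_M/_Q/_Y before the .vbk extension, as described above; the exact convention may differ between versions, so treat it as illustrative:

    # Sketch: map GFS suffixes (_W/_M/_Q/_Y) to a tier name, e.g. to pick an
    # S3 key prefix before uploading. The suffix convention is assumed from
    # the post above and may vary between versions.
    import re

    GFS_LABELS = {"W": "weekly", "M": "monthly", "Q": "quarterly", "Y": "yearly"}

    def classify_backup(filename: str) -> str:
        """Return the GFS tier for a backup file, or 'daily' if no GFS letter."""
        match = re.search(r"_([WMQY])\.vbk$", filename, re.IGNORECASE)
        if match:
            return GFS_LABELS[match.group(1).upper()]
        return "daily"  # plain restore points without a GFS letter

    # Example file names (the second one is hypothetical):
    for name in ["BCJ_2015-02-06T080000.vbk", "BCJ_2015-02-07T080000_W.vbk"]:
        print(name, "->", classify_backup(name))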
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Feb 03, 2015 6:53 am
- Full Name: Daniel Rogers
- Contact:
Re: Backup lifecycle advice
Thank you both for your replies, they have helped me greatly.
"Each GFS restore point has a corresponding postfix in its file name (<filename>_WMQY.vbk); you can use this for determination."
The filename that was created was BCJ_2015-02-06T080000.vbk; there was no weekly suffix. Do I need to specify this change somewhere?
-
- Product Manager
- Posts: 20450
- Liked: 2318 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Backup lifecycle advice
What retention settings does the backup copy job have? It seems that the weekly restore point has not been created yet: the GFS point will be created when the current retention settings are reached, and the oldest restore point will be moved to the day specified for GFS restore point creation. Thanks.
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Feb 03, 2015 6:53 am
- Full Name: Daniel Rogers
- Contact:
Re: Backup lifecycle advice
The backup copy job has:
8 restore points
5 weekly (Sat 8:00am)
13 monthly (1st Sat of month)
2 yearly (1st Sat of year)
Currently I have it set to copy every 7 days and restricted the transfer time between 8am Saturday and 8am Monday.
The main backup job has been configured with 14 restore points. Runs weekdays 10pm.
Does that sound ok?
-
- Product Manager
- Posts: 20450
- Liked: 2318 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Backup lifecycle advice
As mentioned, you'll have to wait until there are 8 restore points in the backup copy job chain, and then until the oldest restore point (.vbk) is moved to the corresponding day. Thanks.
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Feb 03, 2015 6:53 am
- Full Name: Daniel Rogers
- Contact:
Re: Backup lifecycle advice
So the frequency needs to be increased to every day? Otherwise it will be 7 weeks before it does this... so confusing!
-
- Veeam Software
- Posts: 21144
- Liked: 2143 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Backup lifecycle advice
Yes, besides, initially you were talking about daily copies:
daniel_rogers wrote: Daily (INCR) - 8 day retention