-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 30, 2014 10:52 am
- Full Name: Steve
- Contact:
GFS Copy job to manage retention on the same repo.
To make a long story short, a Veeam support engineer recently suggested that we could handle our backup file retention using a GFS copy job rather than running a separate annual job.
I like the idea, so I set up a GFS backup copy job this morning to test it; however, the job won't run because the source and target repositories are the same. This job is only being used to handle file retention, specifically to satisfy our requirement to keep an annual backup, so pointing it at a different target repository isn't really a requirement for our use case. I'm sure separate source and destination repos would perform better, but everything from the backup server to the repository storage is outside our production environment, so the only effect will be a longer backup window.
Is there a registry flag to configure the copy job to ignore this warning, or would we have to set up another repository in order to use GFS-based retention?
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: GFS Copy job to manage retention on the same repo.
Hi,
Did I get it right - you basically want to use a copy job to make copies of the backup files produced by a simple backup job and place them on the very same repo, thus managing backup file retention, is that correct?
Thanks
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Jan 11, 2017 7:55 pm
- Full Name: Seth J
- Contact:
Re: GFS Copy job to manage retention on the same repo.
I actually came here looking for the same thing: I just want an 'end of month' backup that is kept for a year. Backup Copy won't allow the source and target to be the same repo. Do I really have to set up a separate set of jobs for my month end?
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 30, 2014 10:52 am
- Full Name: Steve
- Contact:
Re: GFS Copy job to manage retention on the same repo.
Yes PTide,
We would run a simple backup job as the source, keeping the minimum number of restore points, and from that a backup copy job would run to maintain an annual grandfather restore point along with regular father/son restore points. All restore points from the simple backup job and the copy job would be on the same repo. Ideally, the simple backup job would be configured to handle the GFS rotation itself, but it seems that type of restore point retention is only available with copy jobs.
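For readers unfamiliar with the rotation being described, here is a minimal Python sketch of GFS tier selection. This is only an illustration of the scheme, not Veeam's actual retention logic, and the tier counts are hypothetical examples.

```python
from datetime import date, timedelta

def gfs_keep(points, sons=4, fathers=12, grandfathers=1):
    """Pick which restore points to keep under a simple GFS scheme:
    the newest point in each of the last `sons` ISO weeks (sons),
    the last `fathers` calendar months (fathers), and the last
    `grandfathers` years (the annual grandfather)."""
    weeks, months, years, keep = set(), set(), set(), set()
    for d in sorted(points, reverse=True):  # walk restore points newest first
        wk = d.isocalendar()[:2]            # (ISO year, ISO week)
        if wk not in weeks and len(weeks) < sons:
            weeks.add(wk); keep.add(d)
        mo = (d.year, d.month)
        if mo not in months and len(months) < fathers:
            months.add(mo); keep.add(d)
        if d.year not in years and len(years) < grandfathers:
            years.add(d.year); keep.add(d)
    return sorted(keep)
```

Walking newest-first means each week/month/year tier keeps the most recent point of its period, which is the usual GFS behavior.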
Thanks
-
- Service Provider
- Posts: 43
- Liked: 15 times
- Joined: May 07, 2013 2:50 pm
- Full Name: James Davidson
- Location: Northeast UK
- Contact:
Re: GFS Copy job to manage retention on the same repo.
You can set up two repositories on the same underlying storage volume.
E.g. if you are using F:\Backups as your main backup job repo, you can create a new repo at F:\GFS and use that as the target for the backup copy job.
It's the same underlying disk, but has two Veeam repos on it.
Obviously if you lose the disk then you've lost your recent backups and long term retention so this doesn't meet the 3-2-1 best practice.
@jam_davidson
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 30, 2014 10:52 am
- Full Name: Steve
- Contact:
Re: GFS Copy job to manage retention on the same repo.
jdavidson_waters wrote: Obviously if you lose the disk then you've lost your recent backups and long term retention so this doesn't meet the 3-2-1 best practice.
We also go to tape stored on-site & to offsite rotating disks, so we still hit 3-2-1.
Right now we run an additional VMware backup job once a year to create an annual backup; the GFS copy job would replace that by maintaining an annual grandfather.
Honestly this would be much easier if the source VMware backup job supported GFS-style retention, because all we're doing is using a backup copy job to supplement the simple retention policy that we're restricted to in the source job.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: GFS Copy job to manage retention on the same repo.
James is spot on - you can create another repo folder on the same volume.
Thanks
Honestly this would be much easier if the source VMware backup job supported GFS-style retention
There is an existing thread for that feature request, feel free to post there.
Thanks
-
- Enthusiast
- Posts: 25
- Liked: 1 time
- Joined: Nov 19, 2015 10:00 am
- Contact:
[MERGED] GFS without backup copy jobs?
I've just found out that copy jobs do not allow the source and destination to be on the same server. We've been using them for 7-year GFS retention.
We've recently upgraded our backup server hardware from 3 small-capacity servers (1x 7TB, 2x 32TB) to two large-capacity servers (2x 92TB) and intended to mirror them using rsync.
Since backup copy jobs do not work if the source and destination are on the same server, what would be the correct way to restructure our job design?
We currently take weekly incrementals with a monthly full.
GFS copy jobs then ran on these backup jobs with a policy of 4x monthly, 5x 3-monthly and 7x yearly.
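As a back-of-the-envelope check, the policy above works out to roughly the following retention horizons. The sketch below just does the arithmetic (overlap between tiers is ignored); the tier names and counts simply mirror the quoted policy.

```python
def gfs_horizon_months(monthlies=4, three_monthlies=5, yearlies=7):
    """Approximate how far back each GFS tier reaches, in months,
    for a 4x monthly / 5x 3-monthly / 7x yearly policy."""
    return {
        "monthly": monthlies,              # ~4 months of monthly points
        "3-monthly": three_monthlies * 3,  # ~15 months of quarterly points
        "yearly": yearlies * 12,           # ~84 months, i.e. 7 years
    }
```

So the yearly tier dominates the total retention window, while the monthly and 3-monthly tiers only cover the most recent year or so.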
-
- Veteran
- Posts: 1943
- Liked: 247 times
- Joined: Dec 01, 2016 3:49 pm
- Full Name: Dmitry Grinev
- Location: St.Petersburg
- Contact:
Re: GFS without backup copy jobs?
Hi,
You can create another repository on the same volume; however, that doesn't meet best practices or the 3-2-1 rule.
GFS retention cannot create and retain its restore points without the backup chain of a copy job.
Please review this thread for additional information. Thanks!
-
- Enthusiast
- Posts: 66
- Liked: 5 times
- Joined: Jan 30, 2018 12:06 pm
- Full Name: Simon Osborne
- Contact:
Re: GFS Copy job to manage retention on the same repo.
I have just run into this same issue today. I can understand the spirit of this requirement, given the 3-2-1 rule, and we will be having everything copied offsite to an identical server/disk setup anyway, with key servers also being replicated.
But what we, like so many others, are trying to achieve is monthly and yearly retention off the original backup job, so that we can ultimately have all of our short-term and long-term backups at both sites in one repository (once I set up another backup copy to do that).
It seems silly to add this restriction, especially as we lack the ability to do GFS on the original job. Is it likely that in the future we'll be able to either override the target repository requirement and/or set GFS in the original backup job? After all, we are all adults, and as long as we are explicitly accepting the consequences, surely no harm done.
But for now I have to create a separate folder on the same physical RAID set as the source repository, called GFS or archive, and then create another repository pointed at that folder. It's not a massive chore, but it seems unnecessary.
After all, the first place you tend to need your archival data is your main site.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Jun 14, 2019 1:32 pm
- Contact:
Re: GFS Copy job to manage retention on the same repo.
+1 on getting GFS into the simple backup job as soon as possible. It will save me time and disk space and cut down on the jobs I need to keep my data where I need it. Too much data moving with multiple backup copy jobs for one simple backup. Having to go to the same disk but a different repository for GFS on a dedup appliance is a waste of time and space.
Thanks
-
- Veeam ProPartner
- Posts: 300
- Liked: 44 times
- Joined: Dec 03, 2015 3:41 pm
- Location: UK
- Contact:
Re: GFS Copy job to manage retention on the same repo.
I'm setting this up now, using the 'different folder on the same storage' method.
The 3-2-1 rule isn't an issue here, as this is only for our Test/Dev VMs.
Basically, we don't want them on our expensive DD storage, which is fast running out of space - but would ideally like to keep more monthly retention points than the Simple Backup Job allows.
At the same time, in the case of a DR situation - they're only Test/Dev servers, so no 3-2-1.
My question is this:
The source repo is a Scale-Out Repository, using the local DAS storage of the Veeam servers, ReFS formatted.
The destination repo is a Scale Out Repository - using different folders, on the same local storage.
Is it better to synthesize the restore points in the backup copy job, or to read the entire restore point from the source backup?
Which is the most storage-efficient method - when the source and backup repos are on the same spindles? (or frequently the same - as there are 3 Scale Out nodes)
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: GFS Copy job to manage retention on the same repo.
Synthetic GFS will use FastClone, so it will be faster.