-
- Novice
- Posts: 6
- Liked: 3 times
- Joined: Oct 12, 2013 4:10 am
- Full Name: Toon Vandendriessche
- Contact:
Re: GFS for primary backup jobs
For our small business customers the biggest problem with GFS is that it always creates a full backup file. If they want to store several weekly, monthly and yearly backups, they need a lot of space. And they don't have the budget for a dedup appliance.
Now we solve it by creating different (forever forward incremental) backup jobs with the same VMs, scheduling one job daily, one weekly, another monthly, and so on. But that's not the ideal solution. I hope GFS will be adjusted so it also works with incremental backups.
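The job-splitting workaround above is essentially hand-rolled GFS. For readers unfamiliar with what a GFS retention rule actually selects, here is a minimal Python sketch of one possible policy (keep the N newest dailies, plus the newest backup of each recent week and month); this is an illustration only, not Veeam's implementation:

```python
from datetime import date, timedelta

def gfs_keep(points, daily=14, weekly=12, monthly=12):
    """Given backup dates, return the set kept by a simple GFS policy:
    the newest `daily` points, plus the newest point of each of the
    `weekly` most recent ISO weeks and `monthly` most recent months."""
    points = sorted(points, reverse=True)          # newest first
    keep = set(points[:daily])
    seen_weeks, seen_months = set(), set()
    for p in points:
        wk = p.isocalendar()[:2]                   # (ISO year, ISO week)
        if wk not in seen_weeks and len(seen_weeks) < weekly:
            seen_weeks.add(wk)
            keep.add(p)                            # newest backup of that week
        mo = (p.year, p.month)
        if mo not in seen_months and len(seen_months) < monthly:
            seen_months.add(mo)
            keep.add(p)                            # newest backup of that month
    return keep

# 120 consecutive daily backups ending 2017-06-30
days = [date(2017, 6, 30) - timedelta(n) for n in range(120)]
kept = gfs_keep(days)
```

Because the weekly and monthly picks often coincide with dailies, the kept set ends up much smaller than 14 + 12 + 12 points.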
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: GFS for primary backup jobs
Since it's a small business with relatively small backup files (supposedly), have you thought about stuffing a server with a bunch of directly attached disks and using Windows Server 2012 R2 deduplication? Thanks.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: GFS for primary backup jobs
toon.v10 wrote:For our small business customers the biggest problem with GFS is that it always creates a full backup file. If they want to store several weekly, monthly and yearly backups, they need a lot of space. And they don't have the budget for a dedup appliance.
There is not much sense in running jobs on a weekly or quarterly basis with a long incremental chain, since there will be lots of changes over a quarter/month and the incremental files will be much bigger than if you run backups daily.
-
- Novice
- Posts: 6
- Liked: 3 times
- Joined: Oct 12, 2013 4:10 am
- Full Name: Toon Vandendriessche
- Contact:
Re: GFS for primary backup jobs
v.Eremin wrote:Since it's a small business with relatively small backup files (supposedly), have you thought about stuffing a server with a bunch of directly attached disks and using Windows Server 2012 R2 deduplication?
Hi, we do that for some customers and it is indeed a great and affordable solution, but in Belgium many small businesses are really 'small' and don't want to invest in a separate server. They just use a NAS as the repository.
Shestakov wrote:There is not much sense in running jobs on a weekly or quarterly basis with a long incremental chain, since there will be lots of changes over a quarter/month and the incremental files will be much bigger than if you run backups daily.
OK, if I need a retention of 3 months or so I can cover it with 90 daily backups, but if they want, for instance, 14 daily, 12 weekly and 12 monthly backups, it's difficult to create that with one job. Backup copy jobs with GFS work great for that scenario, but for customers that don't have a dedup solution it just takes a lot of space.
For me Veeam is the greatest backup solution I've ever used, and it would be even greater if I could have GFS retention without the need to store many full backups.
Thanks for your answers!
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: GFS for primary backup jobs
toon.v10 wrote:Hi, we do that for some customers and it is indeed a great and affordable solution, but in Belgium many small businesses are really 'small' and don't want to invest in a separate server.
It doesn't necessarily have to be a brand-new server; a decommissioned one stuffed with a bunch of disks would also be a great choice.
Though, I get what you're trying to say.
Thanks for sharing your feedback.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: GFS for primary backup jobs
toon.v10 wrote:OK, if I need a retention of 3 months or so I can cover it with 90 daily backups, but if they want, for instance, 14 daily, 12 weekly and 12 monthly backups, it's difficult to create that with one job. Backup copy jobs with GFS work great for that scenario, but for customers that don't have a dedup solution it just takes a lot of space.
That makes sense. Note that if you schedule a backup copy GFS job to run, say, weekly on Sunday and monthly on the last Sunday, you will have fewer job runs and fewer backup files: the same backup will be marked as both "weekly" and "monthly", saving repository space.
Nevertheless, your request about GFS in primary backup jobs is taken into account.
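Shestakov's scheduling point can be quantified with quick arithmetic. Assuming, purely for illustration, weekly fulls every Sunday across 2017 and monthly fulls either aligned to the last Sunday of each month or run on a separate day:

```python
from datetime import date, timedelta

# Collect every Sunday in 2017 (the assumed weekly full schedule).
d, sundays = date(2017, 1, 1), []
while d.year == 2017:
    if d.weekday() == 6:                # weekday() 6 = Sunday
        sundays.append(d)
    d += timedelta(days=1)

# Last Sunday of each month (the aligned monthly schedule).
last_sundays = {max(s for s in sundays if s.month == m) for m in range(1, 13)}

aligned  = len(sundays)                      # monthly flag reuses the weekly full
separate = len(sundays) + len(last_sundays)  # monthly fulls run on another day
print(aligned, separate)
```

With aligned schedules the monthly points are just flags on existing weekly fulls (53 files instead of 65 in this example), which is exactly the space saving described above.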
-
- Service Provider
- Posts: 111
- Liked: 21 times
- Joined: Dec 22, 2011 9:12 am
- Full Name: Marcel
- Location: Lucerne, Switzerland
- Contact:
Re: GFS for primary backup jobs
+1 feature request.
I understand the point "And yes, at the same time this also helps to protect inexperienced backup admins from sticking with a single copy of backups."
But what about the experienced ones with a need for it?
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: GFS for primary backup jobs
Marcel,
A request from experienced admins counts for even more than +1.
Thanks for it!
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Mar 24, 2011 2:33 pm
- Full Name: Jason Lehman
- Contact:
[MERGED] Feature Request - GFS to local backups...
The hardest thing for me, coming off traditional backup software like Backup Exec, was how to configure GFS.
I've been working with Veeam support for 2 weeks (testing time included) on how to set up my backups.
There is no easy solution: per support, I had to create 3 separate jobs for my daily, weekly and monthly retention.
It just seems to me like this should have been in the original version of Veeam; I was shocked that it is this difficult.
I was told by Veeam support to make the feature request to "add GFS to local backups." So here I am.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: GFS for primary backup jobs
Thanks for the request, Jason. You can find some argumentation in the thread above.
-
- Veteran
- Posts: 465
- Liked: 136 times
- Joined: Jul 16, 2015 1:31 pm
- Full Name: Marc K
- Contact:
Re: GFS for primary backup jobs
My hopes for this being included in 9.5 are fading away...
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: GFS for primary backup jobs
Right, v9.5 will not address this request.
-
- Enthusiast
- Posts: 35
- Liked: 2 times
- Joined: Jan 20, 2015 12:08 pm
- Full Name: Blake Forslund
- Contact:
Re: GFS for primary backup jobs
Come on, Veeam! Please include it at some point? I'm tired of scripting it or manually copying VBKs. The current method isn't efficient in terms of space or disk I/O.
-
- Lurker
- Posts: 1
- Liked: 1 time
- Joined: Oct 01, 2013 2:10 pm
- Full Name: Andre Werner
- Contact:
[MERGED] GFS for regular Disk-Backup-Jobs
Hey there,
it would be a great advantage if GFS options (like in backup copy jobs) were also available in regular disk backup jobs. Currently we write our disk backups to a Data Domain (DD2500) with good performance. In a second step we read the data back from this DD2500 (very slowly) just to back it up again with GFS settings in a backup copy job. The idea is to have weekly, monthly or older backups directly on disk instead of on tape.
best regards
André
-
- Influencer
- Posts: 20
- Liked: 4 times
- Joined: Jan 12, 2017 7:06 pm
- Contact:
Re: GFS for primary backup jobs
New Veeam customer here...
So is this just plain not going to be considered as a feature request? I keep seeing "you should keep two or more copies of your data anyway", but I don't see how that is relevant to making primary copies more efficient.
I'll give an example scenario, and maybe someone could explain the best solution.
Production Data lives at location A. 50TB disk array at location B. Tape library at location C.
We'd like to keep 30 days of backups, 4 weekly, 12 monthly, and 3 yearly. My only option for my primary job @ location B is for the 30 days of backups, correct? I have no way to tell it to retain weekly, monthly, yearly backups here? I'd love to be able to do that, since I don't want to only have those on tape. I'd love to have them on disk as well, both for restore speed and extra redundancy. Why am I being prevented from doing that? How is that in the best interest of protecting me from myself?
I've seen the advice to create a separate folder on the disk array at location B and do GFS copies to that. This works, but it wastes space because it doesn't appear to utilize the ReFS 3.1 fast clone capability like it does for synthetic fulls.
So my choices appear to be:
1. Find enough disk space at location A to create a minimal (7 day) primary backup so that location B can finally have GFS copies. (Total waste of space and bandwidth)
2. Do GFS copies on the disk array at location B (requires far more space, since ReFS fast clone is not being used)
3. Only have GFS copies live on tape at location C (much slower restore speed and limits me to only one copy of GFS)
Is there a better solution that I'm not seeing?
Thanks!
- Eric
-
- Veeam Software
- Posts: 2097
- Liked: 310 times
- Joined: Nov 17, 2015 2:38 am
- Full Name: Joe Marton
- Location: Chicago, IL
- Contact:
Re: GFS for primary backup jobs
In your case, you could theoretically still use ReFS and leverage the new block clone feature in ReFS 3.1. If you use that 50 TB disk array and make your primary repo a Server 2016 box with the array's disk formatted as ReFS, then creating a second folder for a second repo in order to use a backup copy job with GFS would still give you the ReFS advantages for all the GFS restore points; after all, each of those is a synthetic full. If you've already formatted your primary repo as NTFS, or if it's not even Server 2016, then you'd have to create an additional LUN attached to a server running 2016 and use ReFS there.
Joe
-
- Influencer
- Posts: 20
- Liked: 4 times
- Joined: Jan 12, 2017 7:06 pm
- Contact:
Re: GFS for primary backup jobs
Thanks for the reply, Joe. We do have the 50TB array formatted as ReFS 3.1, and it works amazingly well for the primary backups: synthetic fulls are created very fast and take up very little space. However, it seems that when I do my first backup copy to a different folder on the same volume, it creates a full copy of all of the data. I have a 5TB job, and if I do a backup copy of that job, it seems to make a bit-for-bit copy and I lose 5TB of free space on disk. I haven't tested yet, but my hunch is that subsequent backup copies will leverage that initial backup copy and take up little space (since the 9.5 documentation says fast clone works on backup copies), but it still requires me to use up 10TB of space for the one 5TB job. It seems that since I'm "tricking" Veeam into thinking my second repository is on a different disk, it doesn't attempt to use ReFS to clone the primary copy into a backup copy. Or maybe I have something misconfigured? I will know more after a few more backup copies run.
Thanks,
Eric
-
- Veeam Software
- Posts: 2097
- Liked: 310 times
- Joined: Nov 17, 2015 2:38 am
- Full Name: Joe Marton
- Location: Chicago, IL
- Contact:
Re: GFS for primary backup jobs
You are spot on. The BCJ will initially create an additional copy of all the data (which in some regards isn't necessarily a bad thing), but any GFS points created will be synthetic fulls that should fully leverage block clone within ReFS. So there's some additional space used but in the end you'll still be getting the benefits of ReFS, and with the 19 GFS retention points (4 weeklies + 12 monthlies + 3 yearlies) you'll still have immense space savings.
Joe
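Joe's space argument can be sanity-checked with back-of-envelope numbers. The figures below are assumptions for illustration (a 5 TB full, 5% unique blocks per GFS point); real savings depend on how much data actually changes between points:

```python
full_tb     = 5.0   # assumed size of one full backup
change_rate = 0.05  # assumed fraction of blocks unique to each GFS point
gfs_points  = 19    # 4 weeklies + 12 monthlies + 3 yearlies

# Without block clone, every GFS point is an independent full backup file.
without_clone = gfs_points * full_tb

# With block clone, later synthetic fulls only add their unique blocks.
with_clone = full_tb + (gfs_points - 1) * full_tb * change_rate

print(f"no fast clone: {without_clone:.1f} TB, fast clone: {with_clone:.1f} TB")
```

Under these assumptions that's 95 TB versus 9.5 TB, which is why the extra seed copy required by the backup copy job is a comparatively small price for the GFS chain.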
-
- Veteran
- Posts: 465
- Liked: 136 times
- Joined: Jul 16, 2015 1:31 pm
- Full Name: Marc K
- Contact:
Re: GFS for primary backup jobs
It makes sense that ReFS 3.1 goodness would not apply to backup copies. One option is to ditch ReFS and use NTFS. Then you could turn on NTFS deduplication which would dedup the backup copies against the primary jobs. But, I don't know if I would consider that a "better" solution. There would be a lot lost in moving away from ReFS.
I think the ReFS 3.1 cloning functionality actually adds to the case to have GFS available to primary jobs. Why force someone to use double the repository space just to keep long-term backups at the primary site?
Keeping long term backups off-site, when both the primary and secondary repositories are disk based, actually seems counter-intuitive to me. My use for long term backups is to handle the restore request of "I accidentally deleted a file 2 months ago and am just now getting around to asking about it." If I need to go to the off-site backup for DR, I'm going to want to restore from a recent backup.
-
- Veeam Software
- Posts: 2097
- Liked: 310 times
- Joined: Nov 17, 2015 2:38 am
- Full Name: Joe Marton
- Location: Chicago, IL
- Contact:
Re: GFS for primary backup jobs
I would think the opposite: ReFS builds the case for having GFS in copy jobs. The reason is that if the GFS restore points were in the backup job, all of them would be pointers to the same set of blocks as the regular backup. If you run into a problem, all those GFS restore points are worthless. But if you have an additional copy of all those blocks, which a backup copy job requires, then you're less likely to lose all your GFS restore points.
This doesn't remove the overall use case of having GFS in primary backup jobs. I can see reasons for it and understand the feature request. I just don't think ReFS is a reason.
Joe
-
- Influencer
- Posts: 20
- Liked: 4 times
- Joined: Jan 12, 2017 7:06 pm
- Contact:
Re: GFS for primary backup jobs
I do agree that having separate copies of the data is better, and once we get a second disk array to replace tapes at our third location, I won't care as much about this issue.
However, I do think ReFS makes a stronger case for GFS in primary backups. In the past, it may have been too slow or would consume too much disk space to keep 30 daily, 4 weekly, 12 monthly, and 3 yearly at the primary site. So, we'd keep 30 dailies on primary disk, and the GFS at a secondary site on tape.
With ReFS, keeping the GFS copies on site is very fast and uses very little space. So now it's given me the desire to have GFS at the primary and the backup site. This way, there's nothing lost if a single backup location goes away. Plus, our primary backup repo is much faster than our secondary (tape) repo, so for quick restores I'd love to have them also available on disk. Maybe you mean that this was already possible somehow before ReFS, but for me that was the trigger that made me wish for it.
I understand that you want to encourage customers to keep at least two copies of backup data, and so it's not currently allowed. But it would be nice if it could at least be enabled through a registry key after some kind of warning or disclaimer.
-
- Influencer
- Posts: 14
- Liked: 2 times
- Joined: Feb 02, 2017 2:13 pm
- Full Name: JC
- Contact:
Re: GFS for primary backup jobs
I agree. Just coming from NetBackup, I'm finding it difficult to set up GFS-style retention. In NetBackup I'd have something like 30 days of dailies and weeklies for a year on my primary, all copied to my DR repository. Now that I've already created a primary site repo on my free SAN space with 2016 ReFS, I'm finding it annoying to get the right retention while keeping the ReFS space savings.
-
- Influencer
- Posts: 11
- Liked: never
- Joined: Apr 05, 2013 5:14 pm
- Full Name: JOsh Gfeller
- Contact:
[MERGED] Question about archive
Hello, I know you can create archive points from backup copies for monthly, quarterly, and yearly. However, if I want to keep some of these longer range archival points on my main repository that is also running my regular backup jobs can you set that somewhere in the backup job instead of a backup copy job? If I try to setup a backup copy job with archival and point it to the same repository it won't allow it.
-
- Expert
- Posts: 113
- Liked: 16 times
- Joined: Jun 06, 2014 2:45 pm
- Full Name: csinetops
- Contact:
Re: Question about archive
Nope, you have to backup copy them to an alternate repository. This is by design: you don't really want long-term retention points on your main repository. You can also copy the files manually, or via script, to alternate/removable media for long-term retention. This is what I used to do before getting my AltaVault appliance for long-term retention.
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: GFS for primary backup jobs
We kind of try to impose best practices, according to which storing two copies of the same data in the same location is not a good thing. But, as mentioned above, if you feel safe doing so, create an additional folder on the same device, assign the repository role to it, and point a backup copy job at it. Thanks.
-
- Enthusiast
- Posts: 68
- Liked: 5 times
- Joined: Aug 28, 2015 12:40 pm
- Full Name: tntteam
- Contact:
[MERGED] Any plan to make GFS retention available on regular
Hi there,
Is there any plan to let us have GFS-like retention in regular backup chains?
At the moment we are using backup copy jobs, but it's a complete waste of time and resources since our backups and backup copies are on the same storage array.
Example:
We have a 1-month rollback requirement with 1 point per week, but only a 7-day rollback requirement for daily backups. We would love to be able to have 7 restore points + 4 weekly restore points without having to waste time copying files.
I forgot to mention that 90% of the problems we have with Veeam come from backup copy jobs (restore points that never get deleted, .vib files sometimes present even though only fulls are selected, loss of sync requiring the backup copy job to be recreated, etc. Anyone using backup copy jobs knows what I mean).
Is there any plan to include GFS retention in regular backup chains?
Where can I officially submit this feature request as a paying customer?
Thanks
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: GFS for primary backup jobs
All feature requests submitted through the forums are "officially" accepted by the product management team, so thank you for this feedback.
-
- Influencer
- Posts: 14
- Liked: 2 times
- Joined: Feb 02, 2017 2:13 pm
- Full Name: JC
- Contact:
Re: GFS for primary backup jobs
I'm curious about the ReFS users here who have done a setup where they have the backup copy GFS set up on the primary and secondary sites. What does that look like?
On your primary site, do you have something like E:\backups and E:\GFS?
Then you have your normal backup jobs. What's typical for those? Do you do a small number (or just one) of retention points and handle everything else as backup copies? Or do you primarily use backup jobs for primary and only bring in backup copy GFS for jobs that require it on primary?
I'm about to structure my infrastructure. Ideally I'd mirror everything on primary and secondary, with ReFS on both. Primary storage has dedupe + compression on the array; secondary has compression on the array.
Should I create E:\backups, do retention of maybe 1 or 2 points for 'staging', and then use backup copy jobs to replicate this to the daily and GFS retention I want in E:\GFS (and secondary site S:\GFS)?
I'm just curious how other people are structuring this thing before I start designing policies. I'm lucky to have dedup inline on my storage for primary so a 'staging' area won't lose me that much space.
Thanks
-
- Influencer
- Posts: 20
- Liked: 4 times
- Joined: Jan 12, 2017 7:06 pm
- Contact:
Re: GFS for primary backup jobs
Jimmy -
We have our primary backups on E: and our GFS on F: - only because we had reached our array's max volume size for E:. Otherwise yes, just two different folders on the same drive would be essentially the same thing.
We are still tweaking things to fit into our lack of disk space until summer when we hope to expand, but currently we keep 30 restore points (30 days) on E:. Then we backup copy everything to F:, where we keep 2 daily restore points (the minimum allowed), and retain 12 monthly backups as well. The bummer, of course, is that we have to duplicate our entire environment on F: just to retain the monthly backups we need. What's frustrating is that this isn't a technical hurdle, or an ReFS shortcoming - it's just Veeam's code not allowing GFS retention alongside primary backups. A registry key to override this would be nice. We don't need this safeguard since our primary backups are already off-site, and again to tape at a third site. I'd love to efficiently retain a big timespan of backups on disk for quick and easy restores. No worries about all our eggs in one disk array, since all the same data is off to tape elsewhere.
But yes - you are lucky to have inline dedupe since you won't be nearly as burdened by having a second full copy of your data on the same array.
-
- Influencer
- Posts: 14
- Liked: 2 times
- Joined: Feb 02, 2017 2:13 pm
- Full Name: JC
- Contact:
Re: GFS for primary backup jobs
Thanks Eric, that helps.
Since I only need GFS on select servers, I'll set up something similar: standard backups to (PRIMARY)R:\Backup and backup copies to (DR)R:\Backup without GFS for most jobs. For jobs needing GFS: a standard backup with the needed daily retention, plus a GFS backup copy job with the two required retention points and GFS rules going to (PRIMARY)R:\GFS, and a separate backup copy job with the daily retention from the backup job plus the GFS settings going to (DR)R:\GFS.
Server 1 (2 weeks daily only)
Backup to (Production)R:\Backup
Backup Copy (same 14 points) to (DR)R:\Backup
Server 2 (2 weeks daily, Monthly for a Year, Yearly for 7 years)
Backup to (PRODUCTION)R:\Backup, 14 points
Backup copy to (PRODUCTION)R:\GFS, 2 points, monthly for year, yearly for 7
Backup copy to (DR)R:\GFS, 14 points, monthly for year, yearly for 7
A bit sloppy, but I think I can make it work (although I'm not as familiar with the system as you, already using it). Inline dedup should take care of the ReFS waste on (PRODUCTION), and the setup should let the ReFS chains fully map together at my (DR) site, which has only compression, not inline dedup.