Sure. Onsite we have a primary storage location for the Backup Job, plus a robocopy job at the end of the week to a different storage medium. We also have a Backup Copy job to an offsite storage location for DR purposes, not to mention the replication part. Do you make copies of your backups?
Being able to smartly manage the onsite backups with GFS is helpful first for cramming in more restore points (with or without dedupe) and second for keeping those restore points close at hand, with fast restore speeds compared to offsite backups.
In our view, having a Backup Copy job do the GFS part on the same volume is not a good option. It creates unnecessary management and resource overhead to reprocess restore points and extract VBKs and VIBs on each cycle. For a simple Backup Job (forward incremental), automatically deleting the increments according to a GFS policy and keeping only the VBKs (e.g. 1 per previous year, 12 in the current year and 4 in the current month) frees up more space, with the added benefit, like I said above, of having a properly managed chain close to the restore location.
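To make the policy concrete, here's a rough sketch of the selection logic I have in mind, written as a standalone Python function. This is purely illustrative (the function name and the "keep the latest full per period" rule are my assumptions, not anything Veeam ships): given the dates of the full backups (VBKs), it keeps one per previous year, one per month in the current year, and one per week in the current month; everything else, increments included, would be eligible for deletion.

```python
from datetime import date

def gfs_keep(fulls, today):
    """Pick which full backups (VBKs) to keep under a simple GFS policy:
    the latest full of each previous year, the latest full of each month
    in the current year, and the latest full of each ISO week in the
    current month. Hypothetical sketch, not a real Veeam API."""
    keep = set()
    # Yearly: latest full of each previous year
    for y in {d.year for d in fulls if d.year < today.year}:
        keep.add(max(d for d in fulls if d.year == y))
    # Monthly: latest full of each month in the current year
    for m in {d.month for d in fulls if d.year == today.year}:
        keep.add(max(d for d in fulls
                     if d.year == today.year and d.month == m))
    # Weekly: latest full of each ISO week in the current month
    for w in {d.isocalendar()[1] for d in fulls
              if d.year == today.year and d.month == today.month}:
        keep.add(max(d for d in fulls
                     if d.year == today.year and d.month == today.month
                     and d.isocalendar()[1] == w))
    return sorted(keep)
```

The point is that this is a pure date calculation on existing fulls, with none of the reprocessing a Backup Copy job does to synthesize new VBKs each cycle.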
One thing to mention: Backup Copy Jobs with the forever forward incremental scheme (I believe that's what it's called) don't play well with dedupe. Constantly modifying the VBK decreases the dedupe ratio. Here's an example from our two backup repositories (onsite - simple backup job, offsite - backup copy job):
- the backup job keeps 60 restore points - daily, with an active full on weekends
- the backup copy job keeps 60 restore points - one every 3 days
- Onsite dedupe performance: 30TB savings with a 78% dedupe rate
- Offsite dedupe performance: 4.4TB savings with a 49% dedupe rate
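To put those numbers in perspective, here's a back-of-the-envelope conversion. Big caveat: I'm assuming the reported dedupe rate means savings divided by logical (pre-dedupe) size, which storage vendors don't all agree on, so treat the formula as a guess:

```python
def dedupe_breakdown(savings_tb, rate):
    """Back out logical (pre-dedupe) and stored (on-disk) sizes from a
    savings figure and a dedupe rate, under the assumption that
    rate = savings / logical_size. Illustrative only."""
    logical = savings_tb / rate
    stored = logical - savings_tb
    return logical, stored

# Onsite:  30 TB saved at 78% -> ~38.5 TB logical, ~8.5 TB on disk
# Offsite: 4.4 TB saved at 49% -> ~9.0 TB logical, ~4.6 TB on disk
```

Under that assumption, the onsite repository holds roughly four times the logical data of the offsite one while using not quite twice the disk, which is the gap the constantly rewritten VBK creates.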
I understand that the basic backup job is not considered a historical backup solution but maybe it should be - not at the same level of complexity as the Backup Copy Job, but something much simpler, like I described above.