The basics are this:
- I have 3 Windows Agent Backup jobs that back up unique folders from 3 servers (Srv1 - Srv3) to a NAS device.
- I have a Windows Backup Copy job ('FilesToCloud') that uses those 3 jobs as 'Objects' and backs up to a Scale-Out Backup Repository (SOBR) for my cloud.
- I want to keep 6 months of full backups on my NAS, and 12 months plus 5 annual fulls in the cloud (configured with GFS retention on the jobs).
What I'm seeing is the same Srv1 data stored on the NAS in two places:
\\NAS\Srv1 Files\Srv1
\\NAS\FilesToCloud\Srv1_Files\Srv1
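For what it's worth, something like the following should list which repository each job actually points at (a rough sketch only, assuming the Veeam PowerShell module on the backup server; the GetTargetRepository() call, and whether the agent jobs show up via Get-VBRJob or only via Get-VBRComputerBackupJob, may vary by version):

# Quick sanity check, run in the Veeam PowerShell console on the VBR server.
# Lists each job with its type and target repository to see what writes where.
Get-VBRJob | ForEach-Object {
    [pscustomobject]@{
        Job        = $_.Name
        Type       = $_.JobType
        Repository = $_.GetTargetRepository().Name
    }
} | Format-Table -AutoSize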
I would have thought that, since the Windows Backup Copy job was configured to use the 3 Windows Agent Backup jobs as its source, it would use the results from those jobs rather than effectively re-running them and writing a second copy of the data.
Question: How do I eliminate this redundant storage and processing time?
You may wonder:
Why do I have 3 file backup jobs? Because each server has unique folders to back up, and I prefer not to receive warnings about missing folders, which I would get if I put all 3 in a single job.
Why am I backing up folders, not volumes? Because the volumes contain terabytes of static data that is already backed up elsewhere, while the dynamic data I actually need to protect is small.
Why do I have a Backup Copy job instead of backing up the files directly to the SOBR? Because Veeam recommended that approach, since I wanted different GFS retention lengths on the NAS vs. the cloud.
Thanks in advance for any suggestions.