I have also recently set up another Veeam Backup Copy target at a third location (on a QNAP NAS). This also works brilliantly (although getting the QNAP to behave properly as a Linux repository was quite the pain in the ass! But once configured it works great, just as intended).
Now here come my real troubles, which I hope someone can help me find a solution to, or at least a direction...
The setup:
While I was evaluating and starting to use Veeam Backup, I dumped all backups into one repository on a NAS server. This worked fine. I then segmented the backup job into different jobs. I now have four (4) different jobs scheduled to run at different intervals, according to how important each VM is to the company's infrastructure (some are copied every third hour, some once a day, some only once a week, etc.). This also works without any problems - fantastic even! (Did I mention that in the Veeam software I just include the VM groups I have set up in vSphere, so whenever I add another VM to a group in vSphere it will automatically be backed up by the Veeam job holding that group - cool feature!)
The problem:
Now, this has been working out very nicely, and everything works exactly as expected.
BUT I have a problem: the primary Veeam repository is running out of space. Not so much of a problem in itself, as I have more disks to put in the setup if needed. But to be honest, right now we're just keeping everything forever. And yes, we do indeed need to keep things (not forever, but for at least five (5) years) - BUT we don't need to keep each and every restore point in between.
So when I started exploring Veeam Backup Copy and looked into the retention policies, I was very happy. This is what we're now running to our third replication site (three of the backup jobs go into one big copy-job repository, and the last one has its own copy job). Again, this works, and everything is super cool!
BUT Veeam Backup Copy Jobs only copy NEW data to the offsite backup...

So NOW comes the question!:
I now have these four backup jobs that span 1 1/2 years back in time on my primary storage. That is a lot of data to keep around (about 14 TB right now).
- Is there any way I can enforce GFS retention on them (the backups in my main target repository)?
- Or is there any way I can make a Backup Copy Job that also looks at old data?
- Or can I do some crazy thing and seed all four (4) backup jobs to ONE backup copy destination, and have it all work out with the next Backup Copy Job run?
So the question is: how can I keep my old backups (not all of them, but those selected by retention rules)? Or move everything to a backup copy, including backups from before the Backup Copy Job was started?
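Just to put rough numbers on why GFS retention matters here, this little sketch compares "keep everything" against a hypothetical GFS policy (14 daily / 8 weekly / 12 monthly / 5 yearly restore points - made-up figures, not anything from my actual jobs), assuming one job with daily restore points:

```python
# Rough illustration only. Assumptions: one job producing daily restore
# points, and a hypothetical GFS policy of 14 daily / 8 weekly /
# 12 monthly / 5 yearly points.
days = int(365 * 1.5)          # ~1.5 years of daily backups
keep_all = days                # the current "keep everything" approach

daily, weekly, monthly, yearly = 14, 8, 12, 5
# Only as many yearly points can exist as full years of history:
gfs_points = daily + weekly + monthly + min(yearly, days // 365)

print(keep_all, gfs_points)    # → 547 35
```

So even a generous GFS scheme keeps only a few dozen restore points per job instead of hundreds, which is exactly the kind of pruning I'm hoping to apply to the old data.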
I hope someone has some insight or advice I just did not think of.
Sincerely
Martin Damgaard