Thanks for stirring up the thought process. This morning I created an iSCSI target on my NAS (since Client Computer Backup requires an NTFS volume) and changed my Client Computer Backup properties to move that folder to the new drive. That moves 105 GB out of the VM, leaving only an ongoing local backup for those machines, which Windows Server 2012 Essentials will handle. This could be a good fit for Veeam Endpoint when it's released, but for now this will work.
Based on the above, should I see a large decrease in size on my next Synthetic Full, or will I have to run a new Active Full? Should I archive the current backup chain and start fresh?
I will now keep only the current Backup Job and create the two Backup Copy Jobs described earlier: one for long-term archival to the NAS's local storage, and a second to send data to the cloud:
Backup Job 1 (Incremental, Synthetic Fulls - Weekly, Transform, RPs = 14) ---> Complete VM (CVol) ---> Repo1 (NAS - Local backup)
Proposed (Add 2 Backup Copy Jobs)
Backup Copy Job 1 (Retain = 7, W=4, M=12, Q=0, Y=0) ---> Source = Backup Job 1 (Long term storage - NAS).
Backup Copy Job 2 (Retain = 14) ---> Source = Backup Job 1 (Keep 2 weeks in the cloud).
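To sanity-check what Backup Copy Job 1's settings (Retain = 7, W = 4, M = 12) would actually keep on the NAS, here is a rough Python model of a GFS-style retention policy. It is an illustration only, not Veeam's actual retention algorithm: it assumes weekly points land on Sundays and monthly points on the first of the month, and it ignores chain dependencies between restore points.

```python
from datetime import date, timedelta

def gfs_keep(points, dailies=7, weeklies=4, monthlies=12):
    """Rough GFS model: given sorted restore-point dates, return the
    set a 7-daily / 4-weekly / 12-monthly policy would retain.
    Illustrative only -- Veeam's own retention logic differs in detail."""
    keep = set(points[-dailies:])                 # most recent daily points
    sundays = [d for d in points if d.weekday() == 6]
    keep.update(sundays[-weeklies:])              # last N weekly (Sunday) points
    firsts = {}
    for d in points:                              # first point seen in each month
        firsts.setdefault((d.year, d.month), d)
    keep.update(sorted(firsts.values())[-monthlies:])
    return keep

# One restore point per day for a year:
start = date(2014, 1, 1)
points = [start + timedelta(days=i) for i in range(365)]
kept = gfs_keep(points)
print(len(kept))  # total points retained under 7/4/12, after overlaps
```

The takeaway is that the policy converges on a small, fixed number of retained points (a couple of dozen here), regardless of how long the job runs.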
Do the proposed Backup Copy Jobs seem correct for my intended purpose?
I can't really wait for v8 and Cloud Connect; my customer doesn't feel fully protected and would like to get this out to the cloud now. So I hope to implement a temporary process using storage from Amazon, Google, or a similar provider. The backup job should now see a much lower change rate, which will help with WAN transfer. When v8 is finally released I will switch over to it.
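As a back-of-the-envelope check on the WAN-transfer point: moving the 105 GB of client backups out of the VM is data that no longer has to cross the link at all. A small sketch of the arithmetic, where the 10 Mbit/s uplink speed and 80% effective throughput are purely illustrative assumptions (substitute your own numbers):

```python
def transfer_hours(gib, mbps, efficiency=0.8):
    """Rough hours to push `gib` GiB over an `mbps` uplink that
    achieves `efficiency` (here 80%) of its rated throughput.
    All figures are illustrative assumptions, not measurements."""
    bits = gib * 1024**3 * 8
    return bits / (mbps * 1e6 * efficiency) / 3600

# The ~105 GB removed from the chain, over an assumed 10 Mbit/s uplink:
print(round(transfer_hours(105, 10), 1))  # roughly 31 hours saved per full
```

Even at a generous link speed, that much data is more than a day of continuous transfer, so shrinking the chain before seeding the cloud copy should pay off immediately.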
Any additional thoughts and opinions you might have are greatly appreciated.
Thanks for your input.