Now that I've switched to ReFS for my primary backup target, I would love to fully leverage ReFS block cloning to keep historical backups at our primary site. We already have offsite backups, as one should, but it would be convenient to have a GFS chain at our primary site as well, if nothing else for the occasional email or file restore from six months ago that a user might request.

The problem I'm running into is that, in order to keep GFS backups at our primary site, I have to create a second repo on that server, set up a backup copy job, and then run that. Because block cloning can't be leveraged across jobs, the base disk space requirements are doubled for us. In other words, if an active full of all our VMs takes up 10TB, that doubles to 20TB in order to use a backup copy job that can then take advantage of block cloning for longer-term GFS purposes. Suddenly the whole notion of "spaceless GFS backups" becomes decidedly less spaceless, at least initially.

This is a perfect scenario where either GFS capability within a normal job, or the ability to link backup chains between a normal job and a copy job so that the copy job could leverage block cloning against the primary job's files, would be ideal. We're a pretty small shop and are looking for ways to be more efficient with storage wherever we can.
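To make the space math concrete, here's a quick back-of-envelope sketch using the 10TB figure above (purely illustrative numbers, not from any real sizing tool):

```python
# Rough storage math for the scenario described above.
active_full_tb = 10  # one active full of all our VMs (example figure)

# Today: the primary job chain and the backup copy job chain each need
# their own full, because block cloning can't be shared across jobs.
current_base_tb = active_full_tb * 2

# If GFS lived in the primary job, or the copy job's chain could block-clone
# against the primary job's full, only one set of unique blocks is needed.
shared_clone_tb = active_full_tb

print(f"current base requirement: {current_base_tb} TB")
print(f"with cross-job block cloning: {shared_clone_tb} TB")
```

Incrementals and GFS points would still add some unique blocks over time, of course; the point is just that the *base* requirement for the copy job's full could effectively disappear.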
I realize this might be abused, but that could be somewhat mitigated with friendly warnings or something similar. Again, this has never really mattered to me before, but now that I'm using ReFS, I'm realizing how much space I could save on the base disk requirements of a backup copy job.