We want to back up one of our branches (vSphere) to another one over a site-to-site IPsec tunnel (60 ms latency, 50 Mbit/s symmetrical -> a theoretical maximum of ~22 GB/hour).
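For reference, here is the link-speed math behind that figure (a rough sketch that ignores IPsec/TCP overhead and any latency effects, so the real number will be somewhat lower):

```python
# Rough throughput estimate for a 50 Mbit/s line.
# Assumption: the full 50 Mbit/s is usable; IPsec overhead and
# TCP window effects at 60 ms latency will reduce this in practice.
link_mbit_s = 50
bytes_per_s = link_mbit_s * 1_000_000 / 8          # 6.25 MB/s
gb_per_hour = bytes_per_s * 3600 / 1_000_000_000   # decimal GB
print(f"{gb_per_hour:.1f} GB/hour")                # 22.5 GB/hour
```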
The remote branch has a space problem, so we are very limited in the hardware we can deploy there. We managed to deploy a Synology system, with its pros and cons, but it is x64 and has 16 GB of ECC RAM installed, which allows us to run VMs on it.
One of the workloads to protect is a 10 TB file server (data is distributed across 5 virtual disks). I will most probably split that server into smaller ones to gain parallelism. The total data to protect is around 11-12 TB, and the change rate should be low.
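The initial seeding over the tunnel is worth estimating too (a sketch assuming the ~22.5 GB/hour theoretical rate and no compression/dedup savings, both of which would shorten this):

```python
# Initial full backup (seeding) time over the 50 Mbit/s tunnel.
# Assumptions: ~22.5 GB/hour effective, no reduction from
# compression or deduplication (real seeding should be faster).
data_gb = 11_000                 # ~11 TB to protect
throughput_gb_h = 22.5
hours = data_gb / throughput_gb_h
print(f"~{hours:.0f} h (~{hours / 24:.0f} days) for the initial full")
```

Numbers like this are why seeding the first full locally (and only shipping incrementals over the link) may be worth considering.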
We use forever incremental with 30 days of retention; no periodic active fulls, no synthetic fulls. We also want to do daily backup copies to a different continent (200 ms latency).
We have, as I see it, several options:
- Share the storage as NFS and use it as an NFS repository, with the branch's Windows proxy as gateway/mount server (this is what we are trying now)
- Share the storage as iSCSI, mount the remote disk on the Windows proxy, and create a ReFS repo on it
- Deploy a Linux VM on the Synology and use it as a Linux repo with XFS
ReFS via iSCSI seems attractive, but I do not know how well iSCSI will behave at 60 ms latency, or how painful it will be to recover the file system should the internet link fail while it is mounted.
Deploying a Linux VM on a Synology is not standard for us. It would be a new hypervisor to maintain (pets vs. cattle), but XFS is a good advantage, and I suspect Veeam will be able to saturate the link using multiple threads. This is our possible plan B.
Do you have performance experience with remote storage? Any other suggestions?
Best regards
Seve