Environment is a single vSphere 6.5 cluster (4 hosts) with about 100 VMs eating just shy of 3 TB on a Tintri array. Daily churn looks to be around 10% before compression/dedupe, based on the test backups I've got running. Network is 10G throughout and is solid, with jumbo frames and storage VLANs configured. The VMware environment is healthy and performs well.
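For rough sizing, those figures imply a fairly modest ingest rate. A quick back-of-envelope sketch (the 8-hour nightly window is my assumption, not something stated above):

```python
# Back-of-envelope sizing from the figures above.
total_tb = 3.0          # ~3 TB of VM data on the Tintri
churn = 0.10            # ~10% daily change before compression/dedupe
window_hours = 8        # assumed nightly backup window (assumption)

daily_gb = total_tb * 1000 * churn                 # incrementals per day
mb_per_sec = daily_gb * 1000 / (window_hours * 3600)

print(f"daily incrementals: ~{daily_gb:.0f} GB")
print(f"sustained ingest:   ~{mb_per_sec:.0f} MB/s")
```

Roughly 300 GB/day and ~10 MB/s sustained, which a 10G network and a 24-bay filer should absorb without breaking a sweat; the interesting bottlenecks are more likely per-stream latency and proxy task concurrency than raw bandwidth.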
One of the stacks is going to be somewhat remote, so having the ability to deal with failed disks without immediate access is important.
Aside from VMware and related (Veeam) stuff, we're 99% Linux so every Windows machine we are forced to add is a Martian in terms of licensing, management, etc.
Here's the proposed Veeam stack, with the decision points I'm unsure of marked with question marks:
Veeam manager: Server 2012 R2 VM. The current test environment runs 2 cores and 8GB RAM. Is that enough?
Backup proxies: Server 2012 Core VMs. We need N of these, each with 2 cores and 4GB RAM(?)
Backup repo: Linux VM(s) serving as the headend for NFS/iSCSI (how many?)
Filer: 24-bay FreeNAS box with boatloads of RAM and SSDs for SLOG. Spinning drives will be 4TB each, most likely configured as 11 two-way mirror vdevs with 2 hot spares (though we might go with three-way mirrors, or should we go with a couple of 9-wide RAIDZ3 vdevs instead?)
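The capacity/redundancy trade-off between those layouts is just arithmetic; a sketch of the three options with 4TB disks (how the 24 bays get carved up for the three-way case is my assumption):

```python
# Usable raw capacity of the candidate pool layouts (4 TB disks, 24 bays).
disk_tb = 4

# Option A: 11 two-way mirror vdevs + 2 hot spares (22 + 2 disks)
mirrors_2way_tb = 11 * 1 * disk_tb       # survives 1 failure per vdev

# Option B: 7 three-way mirror vdevs + 3 spares (21 + 3 disks, assumed split)
mirrors_3way_tb = 7 * 1 * disk_tb        # survives 2 failures per vdev

# Option C: 2 x 9-wide RAIDZ3 + spares (18 disks + spares)
raidz3_tb = 2 * (9 - 3) * disk_tb        # survives 3 failures per vdev

print(mirrors_2way_tb, mirrors_3way_tb, raidz3_tb)  # 44, 28, 48 TB raw
```

For the remote site, the two-way mirrors are the thinnest margin: a vdev is one disk failure away from trouble until someone swaps drives, whereas RAIDZ3 or three-way mirrors plus hot spares can ride out multiple failures before a visit is needed. Mirrors will generally give better random-write IOPS for the same spindle count, though.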
Obviously the first goal is to get through incrementals every day with plenty of room to spare. Given the above and the community's experience, where are bottlenecks most likely to appear?
One question of note: how many write streams should we expect hitting the filer during backup runs, and where would the point of diminishing returns be as the stream count increases?
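As a rough model, each running Veeam task writes one stream, and concurrent tasks are typically bounded by proxy task slots (often sized at about one task per proxy core) and by any concurrency cap set on the repository. A sketch with hypothetical numbers (proxy count and per-core task guideline are assumptions, not settled above):

```python
# Rough upper bound on concurrent write streams at the repository.
proxies = 3                 # hypothetical N from the proxy question above
cores_per_proxy = 2         # matches the proposed proxy spec
tasks_per_core = 1          # common sizing guideline (assumption)
repo_task_limit = 8         # hypothetical cap configured on the repository

max_streams = min(proxies * cores_per_proxy * tasks_per_core, repo_task_limit)
print(max_streams)          # one write stream per running task
```

So with three 2-core proxies you'd see on the order of six streams; past whatever point the filer's SLOG/spindles saturate, adding streams just adds contention rather than throughput.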
Also, is there any movement toward Linux-based backup proxies? Such a development would be most welcome.