I'm looking for a bit of advice on a ground-up build for VMware / Veeam / SAN backup infrastructure.
As a bit of background, we are currently running VMware clusters, Veeam 9.5 U3 backups (reverse incrementals) with a Veeam B&R server VM, and a physical Veeam proxy server which is directly connected to the VM cluster storage via FC/iSCSI for reading data. The Veeam backup proxy server also has a separate SAN which is used exclusively for Veeam backups and nothing else. These typically have 4-8 spindles, albeit only 7200RPM nearline drives, ranging from 4-20TB in capacity depending on cluster/site. Backup jobs are configured as application-aware reverse incrementals, with most other defaults left untouched.
I have the option to rebuild the backup infrastructure from the ground up, and with it build a best practice around our deployments of Veeam and the related backup storage, so I would like to gather informed opinions on how to build the best solution available. My goal is to tune the SAN disk groups / stripe size to best match Veeam's application requirements, tune Veeam to make best use of the storage, and select the optimal approach to backups within Veeam along with any associated job settings.
I plan to retain the VM-based B&R server to manage jobs, as well as the physical proxy server to handle the heavy lifting.
The connectivity between the Veeam B&R server and the physical Veeam proxy server is 1Gb Ethernet.
I'm thinking of setting the VM backup jobs' storage optimisation to best suit the storage segment/stripe sizing.
Assuming the following optimisation settings (ignoring deduplication tuning):
Storage Optimisation          Resulting Block Size
LAN target                    512KB
WAN target                    256KB
Local target                  1024KB
Local target (16TB+ files)    8MB
Would I be better off configuring jobs with 1MB blocks (Local target) and SAN stripe widths at or above 1MB?
Any input on this would be appreciated; I'm looking for the optimal settings for throughput.
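To sanity-check the block/stripe maths, here is a rough Python sketch. Note that Veeam writes blocks to the repository after compression; the ~2:1 ratio below is an assumption (a figure often quoted for the default compression level), not something measured from our jobs, so substitute your own observed ratio:

```python
# Rough sketch: how Veeam's post-compression write size maps onto RAID
# stripe widths. The 2:1 compression ratio is an assumption (typical for
# the default compression level); substitute an observed ratio.

def effective_write_kb(block_kb: int, compression_ratio: float = 2.0) -> float:
    """Approximate on-disk write size of one Veeam block after compression."""
    return block_kb / compression_ratio

def stripes_touched(write_kb: float, stripe_width_kb: int) -> float:
    """Full stripes one write spans; fractional results imply
    read-modify-write penalties on parity RAID."""
    return write_kb / stripe_width_kb

for opt, block_kb in [("WAN target", 256), ("LAN target", 512),
                      ("Local target", 1024), ("Local target 16TB+", 8192)]:
    write_kb = effective_write_kb(block_kb)
    for stripe_kb in (256, 512, 1024):
        print(f"{opt:>20}: ~{write_kb:.0f}KB write / {stripe_kb}KB stripe "
              f"-> {stripes_touched(write_kb, stripe_kb):.2f} stripes")
```

The idea being: if the post-compression write size works out to a clean multiple of the full stripe width, parity RAID should see mostly full-stripe writes rather than read-modify-write cycles.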
On a separate point, with regards to minimising the time taken to capture the backup data from the servers, would a forward incremental be the better choice?
Historically I always preferred reverse incrementals; however, after various restores of VMs, both from the latest point and from older points in the reverse chain, I've seen the older points take a little longer to restore, though never to the point where I'd call it a problem. So I'm genuinely considering forwards, moving forward - no pun intended.

I don't have any issue with Veeam performing maintenance after the backup data has been captured, as this leaves the VMs and production storage untouched, and thus unimpacted, after that point.
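To put rough numbers on the capture-window difference, here is a back-of-envelope sketch using the commonly cited repository I/O costs: a forward incremental writes each changed block once (largely sequentially), whereas a reverse incremental performs one read plus two writes per changed block (largely random). The changed-data figure and block size below are made-up placeholders:

```python
# Back-of-envelope repository I/O per backup run. Costs per changed block:
# forward incremental = 1 sequential write; reverse incremental = 1 read +
# 2 writes (3 I/Os, mostly random). Change figures are placeholders.

def repo_ios(changed_gb: float, block_mb: float, ios_per_block: int) -> int:
    blocks = int(changed_gb * 1024 / block_mb)
    return blocks * ios_per_block

changed_gb = 100   # assumed daily changed data across the job
block_mb = 1       # Local target block size (pre-compression)

fwd = repo_ios(changed_gb, block_mb, ios_per_block=1)
rev = repo_ios(changed_gb, block_mb, ios_per_block=3)

print(f"forward incremental : ~{fwd:,} repo I/Os (mostly sequential)")
print(f"reverse incremental : ~{rev:,} repo I/Os (mostly random)")
```

On 7200RPM nearline spindles the random-versus-sequential distinction probably matters even more than the raw 3:1 count.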
I'm also considering GFS retention, assuming I can keep all of this on disk: 7 daily, 4 weekly, 12 monthly.
Again, this would depend on all of the above hardware, with no tape etc.
I'll leave out any reference to the number of restore points and secondary backups to DR / tape etc.; suffice to say, I anticipate 20-30 restore points (on disk) depending on site.
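For what it's worth, here is a crude capacity estimate for that GFS scheme, treating each weekly/monthly point as an independent full (the worst case, with no synthetic block reuse). The full size, change rate and point counts below are placeholder assumptions rather than real figures from our sites:

```python
# Very rough disk-capacity estimate for a GFS scheme of 7 daily, 4 weekly,
# 12 monthly points. Full size and daily change rate are placeholders -
# replace with per-site figures.

full_tb = 10.0        # assumed size of one full backup after compression
daily_change = 0.05   # assumed 5% daily change rate
daily_points = 7
weekly_fulls = 4
monthly_fulls = 12

incrementals_tb = daily_points * full_tb * daily_change
gfs_fulls_tb = (weekly_fulls + monthly_fulls) * full_tb  # worst case
active_full_tb = full_tb

total_tb = active_full_tb + incrementals_tb + gfs_fulls_tb
print(f"active full   : {active_full_tb:6.1f} TB")
print(f"incrementals  : {incrementals_tb:6.1f} TB")
print(f"GFS fulls     : {gfs_fulls_tb:6.1f} TB (no synthetic reuse)")
print(f"total (rough) : {total_tb:6.1f} TB")
```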
I want to get the best possible tuning and performance here. And yes, I know more spindles would be better; where possible that will happen, but I'm not lucky enough to have an unlimited budget.

If you spot any obvious errors here, please let me know.
Many thanks
Owen