I have some questions about choosing the best way to achieve an efficient backup infrastructure.
Our setup is as follows:
- Veeam B&R + Veeam Enterprise manager on the same VM
- Veeam SQL DB on a different VM
- Production storage NetApp, backup storage NetApp as well (NFS for the datastores; NFS and iSCSI for backup)
- Different Veeam repositories, both Windows and Linux
My doubt concerns the Veeam repositories.
I'm introducing a Scale-Out Backup Repository to extend the backups in a more dynamic way. The first step is Backup from Storage Snapshots (BfSS) with NetApp. For this I've set up a physical Windows server with three connections: management, NFS to the NetApp hosting our production VMs, and iSCSI to our backup LUNs. Up to this point everything works well, and BfSS is great for reducing vSphere snapshot overhead.

The "problem" is the backup mount over iSCSI. NetApp is known to be less than optimal on iSCSI and more proven on NFS. The same goes for storage usage: on NFS you have control over the content, and usage and stats are easy to read, while the iSCSI LUNs are a "black container" on the NetApp. Space reclamation also has to be done from the client and isn't visible on the NetApp directly, and the reclaim doesn't work well... it takes a VERY long time and then it hangs.
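Just to make the visibility point concrete, here is a minimal sketch (the paths are placeholders for my environment, not real mount points) of what each repository host reports about its own usage; on an NFS export this matches what the NetApp shows for the volume, while for an iSCSI LUN the array only sees allocated blocks until reclamation runs:

```python
# Minimal sketch: report repository usage as the OS sees it.
# The paths below are placeholders; run it on the respective repository host.
import shutil

REPO_PATHS = {
    "nfs_repo": "/mnt/netapp_backup_nfs",  # hypothetical NFS mount on a Linux repo
    "iscsi_repo": "E:\\",                  # hypothetical NTFS volume on the iSCSI LUN
}

for name, path in REPO_PATHS.items():
    try:
        usage = shutil.disk_usage(path)
        print(f"{name}: {usage.used / 2**30:.1f} GiB used of {usage.total / 2**30:.1f} GiB")
    except OSError as exc:
        print(f"{name}: not reachable ({exc})")
```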
So, if I move the physical Windows server to NFS exports from the NetApp instead of iSCSI, what disadvantages would I have? As far as I know, Instant Recovery needs the proxy to act as an NFS server (the vPower NFS service), so those ports would be used exclusively by the Veeam service, preventing me from mounting NFS shares from the backup NetApp. If I DON'T use Instant Recovery, is it enough to stop this service and mount an NFS export from my backup storage on this physical proxy? Later I could reinstall the proxy as Linux instead of Windows. I think it would work better.
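Before mounting anything, I would probably check on the proxy whether the usual NFS ports are still taken by the Veeam NFS service. A quick sketch, assuming the standard portmapper/NFS ports and a local check:

```python
# Minimal sketch: check whether the standard NFS ports on this host are
# already in use (e.g. by the Veeam vPower NFS service) before mounting
# an export from the backup NetApp.
import socket

NFS_PORTS = {111: "portmapper", 2049: "nfs"}

for port, name in NFS_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        in_use = s.connect_ex(("127.0.0.1", port)) == 0
        print(f"tcp/{port} ({name}): {'already in use' if in_use else 'free'}")
```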
For the other backups that don't use BfSS (for topology reasons) I could keep using Linux proxies with NFS shares, introducing them into Scale-Out Repositories.
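On the Linux proxies, this is a small sketch of how I'd list the current NFS mounts before adding each one as a scale-out extent (pure stdlib, Linux only):

```python
# Minimal sketch: list the NFS mounts a Linux proxy currently sees,
# so each export can be checked before it becomes a scale-out extent.
import pathlib

for line in pathlib.Path("/proc/mounts").read_text().splitlines():
    device, mountpoint, fstype, options, *_ = line.split()
    if fstype.startswith("nfs"):
        print(f"{mountpoint}: {device} ({fstype}, {options})")
```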
What do you suggest for this scenario? Should I consider other changes, am I missing something, or does what I've described seem reasonable?
Thanks a lot!!
Simon