Background:
A friend and I have just started a new hosting company. We are using the latest and greatest tech, namely 10Gb Arista networking, Dell R620 servers, EMC VNX storage (iSCSI) and vSphere 5.1 / vCloud Director 5.1. One of the last things to do was to implement a decent backup solution, and Veeam was No. 1 on the list since we used it at my last company.
So - on to the Veeam chat.

I am looking to hear from anyone who is running a setup like ours, or perhaps from the friendly Veeam staff who will be knowledgeable about these matters. My question is a general one, around the design of an optimised Veeam setup that is also scalable.
For our backup we have built a couple of 24TB storage boxes (Supermicro boards, 24 x 1TB 2.5", 2 x 10Gb NICs, that kind of thing), and our original plan was that this mass storage would be presented to Veeam (and other hosts) via iSCSI or NFS. However, since evaluating Veeam I am wondering whether that is actually the best approach.
Currently, I have Veeam B&R installed directly onto one of these big storage boxes. I'm running Server 2008 R2, using the Microsoft iSCSI initiator with the Microsoft MPIO feature for multipathing. All my tests show that iSCSI multipathing is working optimally: traffic is balanced perfectly evenly over the two 10Gb NICs which go off to the EMC VNX. I can see my VMFS LUNs in Disk Management just fine. Veeam is configured to use local storage for the repository.
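For anyone wanting to replicate this, the MPIO setup on Server 2008 R2 boils down to a few commands - a rough sketch only, target portals and per-LUN policies are obviously site-specific:

```shell
rem Enable the MPIO feature (a reboot is needed at some point)
ocsetup MultipathIo /norestart

rem Tell MPIO to claim all iSCSI-attached devices
rem (the quoted string is Microsoft's fixed bus-type ID for iSCSI;
rem  -r reboots the box when done, use -n instead to suppress that)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

rem After the reboot: list MPIO-managed disks and their load-balance policy
mpclaim -s -d

rem Sanity-check the active iSCSI sessions (you should see one per path)
iscsicli SessionList
```

A round-robin policy across both 10Gb paths is what gives the even NIC balancing I described above.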
Veeam is communicating with my vCenter server and pulling the data off the SAN directly, as verified by looking at the iSCSI NIC traffic. Nowhere do my statistics/reports explicitly say that it is using Direct SAN, but it's definitely using the iSCSI NICs, and I set the backup proxy to "Direct SAN only" with no errors.
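If anyone wants to double-check which NICs the backup traffic is actually riding on, a quick way on 2008 R2 is to watch the per-interface counters while a job runs:

```shell
rem Sample Bytes Received/sec on every NIC, every 5 seconds,
rem and eyeball which interfaces light up during the backup window
typeperf "\Network Interface(*)\Bytes Received/sec" -si 5
```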
Based on the very successful testing and the massive throughput rates I've achieved, my conclusion is "go with this". However, I note that Veeam introduced the idea of backup proxies to provide a scalable solution for large environments, so I'm worried that our solution won't scale, even though it is surely the most efficient. I don't want to go creating VMs that can "see" our SAN at the iSCSI layer and then pull that iSCSI traffic up through VMware as VM traffic. I'm also a little confused about the repository: can a single repository be shared by numerous proxies, or do they each need their own (whether local or mounted via iSCSI)?
Currently my thinking is that this is about as fast/efficient as it could get, and if we ran into trouble with scale I'd simply add another entire physical server running B&R/repository/SQL, i.e. a sideways scale-out.
Anyway, I look forward to hearing any comments, and I'm perfectly happy to share any knowledge we've gained on vSphere 5.1 / EMC VNX / iSCSI multipathing / 10Gb etc.
cheers
Lee.