I've been busy for 4 months with a support issue involving both VMware and Veeam. We have a vCloud setup where the Veeam B&R proxy has no direct network access to the VMs, so when doing a guest file-level restore, Veeam uses VIX to copy files to the guest OS. We noticed a terrible transfer speed of less than 1.5MB/s, while the backend infrastructure is Gigabit and up, so I was expecting a much higher rate. I then eliminated Veeam from the equation by doing a file copy through PowerCLI and the VIX API itself. There I got the same slow transfer speed, so it wasn't Veeam that was guilty. From that point, together with Veeam, we contacted VMware's SDK support. We first asked how fast VIX file transfer should be, as there was no reference to be found in any VMware document or blog. VMware couldn't answer the question themselves, and they set up a lab to reproduce the issue. They also found that the transfer speed can't get higher than 1.5MB/s, so they escalated the issue to development. There they confirmed it's by design; it has to do with the internal logic of VIX, see below for an explanation.
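The elimination test itself was done with PowerCLI's VIX file copy. As a rough, generic sketch of how such a transfer rate can be measured (a local stdlib copy stands in for the VIX transfer here; file names and the 8 MB payload size are made up for illustration):

```python
import os
import shutil
import tempfile
import time

def measure_copy_mbps(src_path, dst_path):
    """Time a file copy and return the observed rate in MB/s."""
    start = time.perf_counter()
    shutil.copyfile(src_path, dst_path)
    elapsed = time.perf_counter() - start
    return os.path.getsize(src_path) / (1024 * 1024) / elapsed

# Demo with a local scratch file; in the real test the copy went
# through PowerCLI / VIX into the guest OS instead.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "payload.bin")
    dst = os.path.join(tmp, "copy.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(8 * 1024 * 1024))
    print(f"{measure_copy_mbps(src, dst):.1f} MB/s")
```

With the VIX path in the middle, the same measurement never got above roughly 1.5MB/s.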
So why am I posting this? Because the only way to get VMware to change this by-design behaviour is to send a feature request, and not just one, but a lot of them. So if you are in a situation where you have to use VIX for guest file-level restore:
- You have vCloud and isolated networks (vShield/VXLAN)
- You have a DMZ with VMs in it
- Any other setup where Veeam B&R has no network-level access to the VMs
please submit a feature request here: http://www.vmware.com/contact/contactus ... od_request
You could refer to VMware ticket number 15823280612.
How VIX currently works:
1) create an async operation and put it into a queue
2) the thread polling that queue gets the operation and processes it
3) the operation is then sent through an async socket, which puts it into another queue
4) the vmx thread polls that queue, processes the operation, and sends it to the VM
5) the VM replies to that operation, then goto 1)
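The numbered steps above can be sketched as a simple model: each chunk of data must traverse the whole pipeline and wait for the VM's reply before the next chunk starts, so throughput is capped at chunk size divided by the round-trip time, no matter how fast the network is. All numbers below are illustrative assumptions, not measured VIX values:

```python
# Assumed payload moved per VIX operation (not a documented VIX value).
CHUNK_SIZE = 64 * 1024

# Assumed per-hop delays for the steps above, purely for illustration.
HOP_DELAYS = {
    "queue_poll": 0.010,    # step 2: thread polling the operation queue
    "socket_queue": 0.010,  # step 3: async socket hand-off to another queue
    "vmx_poll": 0.010,      # step 4: vmx thread polling its queue
    "vm_reply": 0.010,      # step 5: VM acknowledges the operation
}

def throughput_bytes_per_sec(chunk_size=CHUNK_SIZE, hops=HOP_DELAYS):
    """Throughput when only one operation is ever in flight."""
    round_trip = sum(hops.values())  # one full pass through steps 1-5
    return chunk_size / round_trip   # next chunk can't start any sooner

mbps = throughput_bytes_per_sec() / (1024 * 1024)
print(f"modelled throughput: {mbps:.2f} MB/s")
```

With these made-up numbers the model lands in the same ballpark as what we observed; the point is that a fully serialized request/reply design caps throughput at a level that extra bandwidth can't fix.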
Let's make VMware change this ASAP.