As NBD always seems a bit slow, I have something to share with you.

Just a nice anecdote I discovered today. We recently upgraded our VMware environment from Enterprise to Enterprise Plus, so I thought to myself... yummy... let's enable SIOC on the datastores and check out NBD with Veeam B&R after that.
Now, to explain the setup: I use a pure 10 GbE iSCSI network in our new datacenter, with 10 GbE on the ESX(i) hosts, 10 GbE on the Veeam server, and 10 GbE on the EqualLogic arrays. A few days ago I tried NBD and was surprised: it used only about 10% of the 10 GbE connection, which is roughly the same as a 1 Gb link running at 100%. That brought me to the theory that VMware is limiting the NBD traffic from ESX/ESXi to about 10-15% of the NIC's maximum throughput.
Guess what happened after SIOC kicked in? The job started again at about 100 MB/s, but after a few seconds it climbed to 600 MB/s and held that for the remaining minutes until the end of the job (that is the REAL live measurement from Windows at that moment, NOT the number B&R reports). Wow, how cool is that? That is equal to the speed I get when using the direct SAN access method with B&R. My guess is that if the local disk subsystem on the Veeam server (which is actually a really, really fast one) could take even more, I could get a bit more out of it.
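
To put rough numbers on the 10% observation and the 600 MB/s, here is a quick back-of-the-envelope sketch in Python. The 1250 MB/s figure is just the theoretical raw line rate of 10 GbE, not what iSCSI actually delivers once protocol overhead is taken off:

# back-of-the-envelope check of the numbers above (theoretical line rate only)
line_rate_mb_s = 10_000 / 8                        # 10 GbE ~= 1250 MB/s raw
nbd_before_sioc = 0.10 * line_rate_mb_s            # ~125 MB/s, close to the ~100 MB/s I saw
nbd_after_sioc_pct = 600 / line_rate_mb_s * 100    # 600 MB/s ~= 48% of raw line rate
print(nbd_before_sioc, round(nbd_after_sioc_pct))  # 125.0 48

So even the post-SIOC speed is still only about half of the raw link, which fits with my guess that the next bottleneck is the local disk subsystem on the Veeam server.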

And that whole thing brought me to a new theory: what if NBD is limited by ESX/ESXi for only one reason: the host sees the latency grow and throttles the traffic to protect itself. With SIOC, the hosts in a big cluster manage that latency TOGETHER... interesting. Will investigate more.
Best regards,
Joerg