So after enough fighting with SBJs I have them set up to the best of my ability; however, I notice that when I watch the latency on the disks of any of the VMs that boot from a SBJ, or any subsequent instant-recovered VMs, the latency is TERRIBLE! It averages around 100 ms, which is insane. I once had my SQL server in my test environment hit a 4000 ms read delay (that's 4 seconds for ONE IOP!).
First things first: I run my Veeam server inside my VM cluster as a VM. The VM is multihomed, with its default gateway out the datacenter VLAN and a non-gateway leg on the SAN network.
First I tried an iSCSI-mounted SSD on my FreeNAS server, presented as a dedicated iSCSI disk to my Veeam server. That didn't work out so well. I assumed it was because it was sharing the SAN pipe with Veeam's own VMDK (which sits on another SAN, but since I don't pay for ESXi Enterprise I can't set up true LACP-bonded NICs, so even though I have 2 x 1 Gb NICs, only one is ever used at a time).
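A quick back-of-envelope shows why sharing that single 1 GbE pipe hurts. The ~10% protocol overhead and the "4 busy VMs" figure below are illustrative assumptions, not measurements from my setup:

```python
# Rough effective bandwidth of a single 1 GbE link carrying iSCSI.
LINK_GBPS = 1.0          # only one NIC active at a time (no LACP on this license)
PROTO_OVERHEAD = 0.10    # assume ~10% lost to TCP/IP + iSCSI framing

usable_mb_s = (LINK_GBPS * 1000 / 8) * (1 - PROTO_OVERHEAD)
print(f"Usable iSCSI throughput on one NIC: ~{usable_mb_s:.1f} MB/s")

# If the vPower NFS reads and the SSD target share that same pipe, every VM
# booting during an instant recovery competes for this single budget.
per_vm = usable_mb_s / 4   # e.g. 4 VMs booting at once
print(f"Shared across 4 busy VMs: ~{per_vm:.1f} MB/s each")
```

Once that budget saturates, extra IOs just queue up, which shows up as exactly this kind of latency spike.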

Sooo, I figured: eh, I have a local RAID 5 of a bunch of 10k SAS disks directly on my hypervisor. So I log into vSphere and add a VMDK to my Veeam server that sits directly on that local datastore of the hypervisor; in this case there's no other traffic or use on the datastore besides this newly mounted vPower NFS stuff. Sadly, after spinning up my third VM (my SQL server), it was still getting 300-400 ms latency.
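A rough ceiling for a small 10k SAS RAID 5 helps explain this. The disk count and per-disk IOPS below are assumptions for illustration (the post doesn't say how many spindles the array has):

```python
# Back-of-envelope IOPS ceiling for a small RAID 5 of 10k SAS disks.
DISKS = 4                  # assumed spindle count
IOPS_PER_DISK = 140        # typical-ish for a 10k RPM SAS drive
RAID5_WRITE_PENALTY = 4    # read-modify-write: 2 reads + 2 writes per host write

read_iops = DISKS * IOPS_PER_DISK
write_iops = read_iops / RAID5_WRITE_PENALTY

print(f"Random read ceiling:  ~{read_iops} IOPS")
print(f"Random write ceiling: ~{write_iops:.0f} IOPS")
```

A few VMs booting plus a SQL server can easily demand more random IO than that; once the array saturates, every additional IO waits in queue and latency climbs fast, regardless of where the vPower NFS datastore lives.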

I also tried an instant VM recovery and pointed it at the host's local datastore (the same 10k SAS RAID 5) instead of the default vPower NFS store, but again got terrible latency.
At this point I'm thinking of picking up an Intel PCIe NVMe SSD, plugging it directly into my hypervisor, and either providing direct hardware passthrough to the VM (I know I'll have to make sure the drivers are installed in the VM and not on the host when doing this) or mounting it on the host and attaching a dedicated VMDK, though I feel that option might carry more protocol overhead. Anyway.
I need suggestions on how I can get decent latency in my test environment, because let's face it... it's damn slow once I'm past 4 VMs.
I'm at my wits' end trying to get a fast enough dedicated disk to simply spin up around 6 VMs in a test environment and keep my latency under 50 ms at most.
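For anyone wanting to compare candidate disks (iSCSI SSD vs local RAID 5 vs NVMe passthrough) before pointing vPower NFS at them, here's a crude synchronous-write latency check I'd run inside the guest. It's a rough sketch, not a substitute for a proper tool like fio; the 4K block size and IO count are arbitrary:

```python
import os
import statistics
import tempfile
import time

def write_latency_ms(path, count=200, block=4096):
    """Issue synchronous 4K writes and return per-IO latencies in ms.
    Crude sanity check only -- fio/diskspd give far better numbers."""
    buf = os.urandom(block)
    latencies = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(count):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)  # force the IO to the device, not just the page cache
            latencies.append((time.perf_counter() - t0) * 1000)
    finally:
        os.close(fd)
    return latencies

if __name__ == "__main__":
    # Point this at a file on the datastore-backed disk you want to test.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        lat = write_latency_ms(path)
        print(f"avg {statistics.mean(lat):.2f} ms, "
              f"p95 {sorted(lat)[int(len(lat) * 0.95)]:.2f} ms")
    finally:
        os.unlink(path)
```

If a disk can't stay under 50 ms on a test like this with nothing else running, it definitely won't while 6 VMs are booting off it.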