-
- Veteran
- Posts: 377
- Liked: 86 times
- Joined: Mar 17, 2015 9:50 pm
- Full Name: Aemilianus Kehler
- Contact:
Lower vPower Latency
Hey all,
So after enough fighting with SureBackup jobs (SBJs) I have them set up to the best of my ability; however, when I watch the latency on the disks of any of the VMs that boot in a SureBackup job, or of any subsequently instant-recovered VMs, the latency is TERRIBLE! It averages around 100 ms, which is insane. I once had my SQL server in my test environment hit a 4000 ms read delay (that's 4 seconds for ONE IOP!).
First things first: I run my Veeam server as a VM inside my VM cluster. The VM is multihomed, with its default gateway on the datacenter VLAN and a non-gateway leg in the SAN network.
First I tried an iSCSI-mounted SSD on my FreeNAS server, presented as a dedicated iSCSI disk to my Veeam server. That didn't work out so well, I assume because it shared the SAN pipe with Veeam's own VMDK (which is on another SAN, but since I don't pay for ESXi Enterprise I can't set up true LACP-bonded NICs, so even though I have 2 x 1 Gb NICs it only ever uses one NIC at a time).
So then I figured: I have a local RAID 5, consisting of a bunch of 10k SAS disks, directly on my hypervisor. I logged into vSphere and added a VMDK to my Veeam server that sits directly on that local datastore of the hypervisor; in this case there is no other traffic or use on the datastore besides the newly mounted vPower NFS data. Sadly, after spinning up my third VM, which was my SQL server, it was still getting 300-400 ms latency.
I also tried an Instant VM Recovery and pointed it at the host's local datastore (the same 10k SAS RAID 5) instead of the default vPower NFS datastore, but again got terrible latency.
At this point I'm thinking of picking up an Intel PCIe NVMe SSD, plugging it directly into my hypervisor, and either giving the VM direct hardware passthrough (I know I'll have to make sure the drivers are installed in the VM and not on the host when doing this) or mounting it on the host and attaching a dedicated VMDK, though I feel the latter might have more protocol overhead.
I need suggestions on how I can get decent latency in my test environment, because let's face it... it's painfully slow once 4+ VMs are running.
I'm at my wits' end trying to get a fast enough dedicated disk to simply spin up around 6 VMs in a test environment and keep my latency under 50 ms at most.
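As a side note, here is a minimal sketch of the kind of probe that can confirm those numbers from inside one of the recovered guests. Assumptions: Python is available in the guest, and C:\temp\latency_probe.bin is a made-up path pointing at a pre-existing multi-GB file on the disk under test; a file larger than the guest's RAM keeps the OS cache from hiding the real latency.
Code:
# Minimal random-read latency probe (a rough sketch, not a proper benchmark tool).
# Run it inside a guest whose disk sits on the vPower NFS datastore; TEST_FILE is
# a hypothetical path -- point it at any large existing file on the disk under test.
import os
import random
import time

TEST_FILE = r"C:\temp\latency_probe.bin"   # hypothetical path, several GB in size
BLOCK = 4096                               # 4 KiB reads, roughly one small IOP each
SAMPLES = 200

size = os.path.getsize(TEST_FILE)
latencies = []
with open(TEST_FILE, "rb", buffering=0) as f:      # buffering=0 skips Python's own cache
    for _ in range(SAMPLES):
        offset = random.randrange(0, size - BLOCK)  # random offset to defeat read-ahead
        start = time.perf_counter()
        f.seek(offset)
        f.read(BLOCK)
        latencies.append((time.perf_counter() - start) * 1000.0)

latencies.sort()
print(f"median: {latencies[len(latencies) // 2]:.1f} ms, "
      f"p95: {latencies[int(len(latencies) * 0.95)]:.1f} ms")
If the median here roughly matches what vSphere reports for the virtual disk, the latency really is coming from the storage path rather than from the guest or the monitoring itself.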
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Lower vPower Latency
Hi,
Could you please tell us which VBR version you are currently on? The VBR server is a VM running inside one of your hosts, is that correct? Also, please describe your repository configuration and location. One more question about the latency: is it that bad on both writes and reads, or only on one of them?
Thank you.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Lower vPower Latency
Zew wrote: I need suggestions on how I can get decent latency in my test environment, because let's face it... it's painfully slow once 4+ VMs are running. I'm at my wits' end trying to get a fast enough dedicated disk to simply spin up around 6 VMs in a test environment and keep my latency under 50 ms at most.
vPower NFS might be the bottleneck in the case of parallel restores: it is single-threaded and was not designed to be a high-performance NFS server, but rather an acceptable way to restore a VM or a few quickly when a system is down (not for DR of the entire production environment). Here are a couple of threads that also discuss vPower NFS performance:
vPower NFS Performance
Planning my deployment? Any advice, some questions.
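To make the single-threaded point concrete, here is a toy back-of-the-envelope model (not Veeam code; the 20 ms per-request service time is an assumed figure for illustration only) showing how per-I/O latency grows roughly linearly once several VMs all queue on one NFS worker:
Code:
# Toy queueing sketch: why a single-threaded NFS service makes latency grow
# with the number of concurrently running VMs. Numbers are assumptions.

def expected_latency_ms(service_ms: float, concurrent_requests: int) -> float:
    """With a single worker, a new request waits behind everything already queued."""
    return service_ms * concurrent_requests

service_ms = 20.0  # assumed average service time per request at the NFS layer
for vms in (1, 2, 4, 6):
    # assume each running VM keeps roughly one request outstanding at a time
    print(f"{vms} VM(s) -> ~{expected_latency_ms(service_ms, vms):.0f} ms per I/O")
With figures in that ballpark, four to six concurrently running VMs land right in the 100+ ms range reported above.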
-
- Veteran
- Posts: 377
- Liked: 86 times
- Joined: Mar 17, 2015 9:50 pm
- Full Name: Aemilianus Kehler
- Contact:
Re: Lower vPower Latency
PTide wrote: Hi, could you please tell us which VBR version you are currently on? The VBR server is a VM running inside one of your hosts, is that correct? Also, please describe your repository configuration and location. One more question about the latency: is it that bad on both writes and reads, or only on one of them? Thank you.
9.0.0.902
[PT] UPDATE based on PM:
So the Veeam backup server is a VM sitting on one of these hypervisors. Each hypervisor also has local disks and storage (which used to host VMs before the SAN came into play for HA purposes): 7 x 10k SAS disks per hypervisor, also in a RAID 5 config, with a total capacity of around 1 TB. The Veeam server's primary VMDK, which Windows and the Veeam software reside on (C:\Windows, Program Files, ProgramData, etc.), is on the VNXe SAN. I then attached a 200 GB VMDK to this VM that sits on the local VMFS storage of my hypervisor. I created a folder on this disk (I made it thin-provisioned, by the way, probably not the best choice for this application, my bad), and this storage location and folder (on the local RAID 5 VMFS datastore) is where I set up vPower NFS.
Now for the last piece, the backup data. As I mentioned, the Veeam VM has a leg in the SAN vSwitch as well as its primary connection to everything on the DataCenter vSwitch. All backups are stored on an iOmega px12-350r, connected to the same SAN subnet as the hypervisors and the VNXe. However, it is accessed via an SMB shared folder, as I managed to put FreeNAS on that backup box. Again, 2 x 1 Gb NICs connect it to the SAN subnet. (Everything is connected to our core layer 3 switches (3750s), and yes, ACLs are in place.)
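Since every read that vPower NFS serves ultimately comes off that SMB share over a single 1 Gb leg, it may be worth measuring that path in isolation from the Veeam server first. A rough sketch (the UNC path and file name are placeholders; point it at any large backup file on the px12 share):
Code:
# Rough sequential-read check against the SMB repository, run on the Veeam server.
# UNC_FILE is a placeholder -- substitute any large existing .vbk on the share.
import time

UNC_FILE = r"\\px12-350r\backups\some-job\some-restore-point.vbk"  # hypothetical path
CHUNK = 4 * 1024 * 1024        # 4 MiB reads
LIMIT = 2 * 1024 ** 3          # sample roughly the first 2 GiB

read_bytes = 0
start = time.perf_counter()
with open(UNC_FILE, "rb") as f:
    while read_bytes < LIMIT:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        read_bytes += len(chunk)
elapsed = time.perf_counter() - start
print(f"{read_bytes / 1024**2 / elapsed:.0f} MiB/s over {elapsed:.1f} s")
A single 1 Gb link tops out around 110-115 MiB/s, and that bandwidth, plus the share's own latency, is shared by every instantly recovered VM running at the same time.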
-
- Veteran
- Posts: 377
- Liked: 86 times
- Joined: Mar 17, 2015 9:50 pm
- Full Name: Aemilianus Kehler
- Contact:
Re: Lower vPower Latency
foggy wrote: vPower NFS might be the bottleneck in the case of parallel restores; it is single-threaded and was not designed to be a high-performance NFS server, but rather an acceptable way to restore a VM or a few quickly when a system is down (not for DR of the entire production environment).
Is there any way on the Veeam server to "watch" vPower NFS using system resources? Maybe I could tweak my Veeam VM so it runs a single-threaded application faster?
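In the meantime, something along these lines could show whether the vPower NFS service is pegging a single core on the Veeam VM. This is a sketch using psutil; the process name is an assumption, so check Task Manager or services.msc for the actual executable behind the Veeam vPower NFS Service and adjust it:
Code:
# Sketch for "watching" the vPower NFS service's CPU use on the Veeam server.
# Requires: pip install psutil. PROC_NAME is an assumption -- verify the real
# executable name of the Veeam vPower NFS Service before relying on this.
import psutil

PROC_NAME = "VeeamNFSSvc.exe"   # assumed name, adjust to what Task Manager shows
cores = psutil.cpu_count(logical=True)

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == PROC_NAME.lower():
        busy = proc.cpu_percent(interval=5.0)   # ~100 means one full core is busy
        print(f"pid {proc.pid}: {busy:.0f}% CPU over 5 s "
              f"(host has {cores} logical cores; ~100% here = one core maxed out)")
If that figure sits near 100% of one core while the VM still has idle cores, fewer but faster vCPUs (higher clock speed) should help more than adding vCPUs, since the service cannot spread its work across cores.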
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Lower vPower Latency
What is the datastore that you've specified in vLab settings?
Thank you.
-
- Veteran
- Posts: 377
- Liked: 86 times
- Joined: Mar 17, 2015 9:50 pm
- Full Name: Aemilianus Kehler
- Contact:
Re: Lower vPower Latency
The same VMFS-based datastore on the local hypervisor that hosts the Veeam server.