chimera
Enthusiast
Posts: 57
Liked: 3 times
Joined: Apr 09, 2009 1:00 am
Full Name: J I

iSCSI latency with Veeam proxy / hotadd

Post by chimera »

I'm after faster backups and a shorter backup window, so I decided to create a virtual appliance and utilise hot-add. I was only using the one default proxy (limited to 2 concurrent tasks, having only a single quad-core). So I created a virtual Windows 2008 R2 Core server, pushed the Veeam backup proxy role to it, and set its transport mode to Automatic.

Both proxies function OK and jobs are divided between them, but when both proxies are in use by different jobs, iSCSI latency jumps really high. We're running a 10GbE EqualLogic iSCSI SAN. Under normal circumstances SANHQ reports latency of < 10ms (and generally < 6ms), but with the hot-add proxy installed, overall backup performance seems worse. What appears to be happening is this: if the hot-add proxy is in use by one job while another job runs concurrently on the Direct SAN access proxy, VM performance monitoring shows really high latency on both the VM backup proxy and the VM that's being backed up. SANHQ reports exceptionally high latency too (as high as 500ms), and backup performance drops hard (as low as 1-2Mbps), obviously due to the latency.

Why would read latency jump so high? Are both proxies contending for iSCSI bandwidth? What would the best solution be here: purchase a 2nd physical CPU for the Veeam box, increase the concurrent tasks, and remove the hot-add proxy?
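As a rough sanity check on the bandwidth-contention theory: a couple of concurrent backup streams should be nowhere near a 10GbE path. The sketch below assumes ~90% usable link efficiency, and the per-job read rates are hypothetical placeholders, not measured values.

Code: Select all

# Rough check: do the combined proxy read streams approach a 10GbE path?
# The per-job throughput figures below are hypothetical placeholders.
LINK_MB_S = 10_000 / 8 * 0.9   # ~1,125 MB/s usable on 10GbE (assumed 90% efficiency)

job_reads_mb_s = {
    "direct_san_job": 180,  # hypothetical
    "hotadd_job": 150,      # hypothetical
}

total = sum(job_reads_mb_s.values())
print(f"{total} MB/s of {LINK_MB_S:.0f} MB/s ({total / LINK_MB_S:.0%} of the link)")
# A few hundred MB/s is well short of 10GbE, which would point at the disks
# and their queues, rather than the network, as the latency source.

If the combined rates come out well under the link capacity, contention on the wire is unlikely to be the culprit.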
J1mbo
Veteran
Posts: 261
Liked: 29 times
Joined: May 03, 2011 12:51 pm
Full Name: James Pearce

Re: iSCSI latency with Veeam proxy / hotadd

Post by J1mbo »

What transfer rates are you seeing? Latency at the disks is a function of IOPS and queue depth; latency seen at the host is further compounded by network throughput (I/O size can drive latency). So really it's a question of understanding whether the physical disks are the cause of the latency, or the network between them is being saturated.
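To put rough numbers on that relationship: by Little's Law, average latency ≈ outstanding I/Os ÷ IOPS, so a second proxy stacking its queue against the same spindles roughly doubles latency even when the array delivers the same IOPS. A minimal sketch with illustrative figures, not measurements from this environment:

Code: Select all

# Little's Law: average latency = outstanding I/Os / IOPS
def avg_latency_ms(outstanding_ios: float, iops: float) -> float:
    return outstanding_ios / iops * 1000.0

# One busy initiator: queue depth 32 at 3,000 IOPS -> ~10.7 ms
print(avg_latency_ms(32, 3000))

# Two proxies queueing against the same disks, which still top out
# around 3,000 IOPS: queue depth 64 -> ~21.3 ms
print(avg_latency_ms(64, 3000))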
tsightler
VP, Product Management
Posts: 6011
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: iSCSI latency with Veeam proxy / hotadd

Post by tsightler »

What throughput is SANHQ reporting during this time? You may simply be saturating your SAN with reads. Are you seeing any writes on the array, perhaps during the snapshot removal stage?
chimera
Enthusiast
Posts: 57
Liked: 3 times
Joined: Apr 09, 2009 1:00 am
Full Name: J I

Re: iSCSI latency with Veeam proxy / hotadd

Post by chimera »

What is exceptionally strange is that, on further investigation, this problem lasted only about 10 minutes. The average IOPS in SANHQ usually hover around 600 reads and 200 writes, with an average I/O rate of 30MB/sec reads and 1.5MB/sec writes. Around this 10-minute period, average IOPS jumped to 3,000 reads and 2,800 writes, and the average I/O rate to 180MB/sec reads and 165MB/sec writes. In fact everything spiked massively at that time except average I/O size (KB), where only writes increased ever so slightly. My concern is a Veeam backup being run during the day and the SAN going AWOL for 10 minutes, causing servicing issues.

It also seems odd that even these "normal" averages appear very low. I would expect Direct SAN backups to push the SAN a lot harder, especially when Veeam reports that the Source is the bottleneck.
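Those averages imply the read I/O size barely moved during the spike, so it looks like a pure IOPS/throughput event rather than a change in I/O pattern. A quick cross-check using the SANHQ figures quoted above (throughput ÷ IOPS = average I/O size):

Code: Select all

# Average I/O size = throughput / IOPS, from the SANHQ averages above
def avg_io_kb(mb_per_sec: float, iops: float) -> float:
    return mb_per_sec * 1024 / iops

print(avg_io_kb(30, 600))     # normal reads:  ~51 KB
print(avg_io_kb(180, 3000))   # spike reads:   ~61 KB
print(avg_io_kb(1.5, 200))    # normal writes: ~8 KB
print(avg_io_kb(165, 2800))   # spike writes:  ~60 KB

The implied jump in average write size (rough, given SANHQ's window averaging) would fit tsightler's snapshot-removal theory, since committing a VM snapshot generates large sequential writes.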