vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups

by vmNik » Fri Apr 10, 2015 12:23 am

Hi everyone!

Can anyone share their Veeam backup speeds?

I've got several different setups (iSCSI MPIO, NFS, SMB and DAS) backed by a Veeam instance running in vSphere, and I'm trying to squeeze every last bit of performance out of what we've got to find the best realistic and potential backup speeds.

With backups running on the same subnet (Veeam on the same subnet as the vSphere hosts, no gateways in between), I'm achieving about 70 MB/s for VM backups to VMDK repositories presented to Veeam over 4-path iSCSI MPIO with the Round Robin PSP set to IOPS=1. All paths are used equally, but the speeds are not what I'd like. I'd like to know what backup/restore speeds people are getting in the same or similar scenarios.
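
For anyone comparing, the per-device Round Robin IOPS limit can be checked and set from the ESXi shell along these lines (the naa ID below is just a placeholder for one of the EQL volumes):

    # List devices with their current path selection policy and RR settings
    esxcli storage nmp device list

    # Switch the Round Robin policy to rotate paths after every I/O for one volume
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.6090a0xxxxxxxx --type=iops --iops=1

    # Confirm the change
    esxcli storage nmp psp roundrobin deviceconfig get --device=naa.6090a0xxxxxxxx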

Thanks!
vmNik

Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups

by Gostev (Veeam Software) » Fri Apr 10, 2015 1:07 am

Hello.

You can find plenty of existing discussions where people share performance numbers on these forums, so there is really no need for yet another "post your speed" thread. It's also pointless, because you will see people reporting anywhere from a few MB/s (when there are issues) to a few hundred MB/s, depending on their hardware and setup ;)

So I recommend we focus on finding the issue in your setup instead.

I can say that 70 MB/s is laughable for pretty much any setup on proper production hardware that I can imagine, except perhaps when using NBD over 1 Gbps to grab the source data... thus my questions for a start:
- What processing mode does your job use?
- What is reported as a bottleneck by the job?

If you are not sure what these are, please review the sticky FAQ topic.

Thanks!

Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups

by vmNik » Fri Apr 10, 2015 2:02 am

Gostev,

I'm thinking there are misconfigurations somewhere in my vSphere environment, and perhaps some switches that don't have jumbo frames and flow control enabled; it's a work in progress and I'm looking into it.
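
In case it's useful to anyone following along, these are the sort of checks I'm running from the ESXi shell for the MTU side of things (the EQL group IP below is just a placeholder; the physical switches still have to be verified on their own):

    # MTU on the iSCSI vmkernel ports (should report 9000 for jumbo frames)
    esxcli network ip interface list

    # MTU on the standard vSwitch carrying the iSCSI port groups
    esxcli network vswitch standard list

    # End-to-end jumbo frame test to the EqualLogic group IP
    # 8972 bytes = 9000 minus IP/ICMP headers; -d sets the don't-fragment bit
    vmkping -d -s 8972 <EQL group IP>

If the vmkping with -d -s 8972 fails while a plain vmkping works, something in the path (vSwitch, NIC, physical switch or array port) is not passing jumbo frames.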

To answer your questions, the reported bottleneck varies with the destination repository in use. For the VMDK datastores hosted on the SAN, I usually see Source reported as the bottleneck, and I've tried setting the Veeam proxy transport mode to Direct SAN, Virtual Appliance and Auto.

Thanks.
VCP5-DCV

Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups

by foggy (Veeam Software) » Fri Apr 10, 2015 11:21 am

This thread can give you some hints: Equallogic HIT Kit and Direct SAN Access

Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups

by Gostev (Veeam Software) » Fri Apr 10, 2015 12:27 pm

vmNik wrote: I've tried setting the Veeam proxy transport mode to Direct SAN, Virtual Appliance and Auto

Not sure what you are saying here, as these are mutually exclusive settings and you can only pick one of them. Even more important to know, however, is the effective mode used by the job; this is displayed in the job log next to each processed disk.

Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups

by vmNik » Sat Apr 11, 2015 9:24 pm

I went ahead and reinstalled Dell MEM 1.2 and let it recreate the iSCSI vSwitch, and I'm testing over 4-path MPIO now. The Veeam VM is on a host that has a 200 GB datastore mapped to the EQL, and backing up a VM on the same host through Veeam in Virtual Appliance mode peaked at around 140 MB/s, which is a big improvement. I think more can be squeezed out of this 4x1 Gbps setup, so I'm going to check the switches to make sure they have an MTU of 9000, flow control, etc. Testing with IOMeter, I got around 200 MB/s reads to that EQL-backed VMDK in a Win7 VM.

BTW, this host is on vSphere 5.0 and I'm updating it to 5.5 at the moment; I know there are many improvements in how MEM interfaces with iSCSI in releases newer than 5.0.
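
For completeness, this is roughly how I'm confirming that MEM actually claimed the EQL volumes after the reinstall (the DELL_PSP_EQL_ROUTED name is from memory, so treat it as an assumption rather than gospel):

    # EQL volumes should show the MEM path selection policy instead of VMW_PSP_RR
    esxcli storage nmp device list

    # Double-check the ESXi build before/after the 5.0 -> 5.5 update
    esxcli system version get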

Given this scenario with a 4x1 Gbps setup to the SAN, should I be expecting more than 140 MB/s transfers from a Veeam VM to a VMDK?

Thanks!
VCP5-DCV

Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups

by vmNik » Sun Apr 12, 2015 12:25 pm

I edited the Dell MEM config on the host I'm testing with so that the session limit allows all 4 iSCSI paths per volume (although that's not really recommended in production), just to see what happens when every path is in use, and then went back to backup testing.
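
To sanity-check that the extra sessions actually came up, I'm counting paths and iSCSI sessions along these lines (the naa ID and vmhba number below are placeholders for my environment):

    # List all paths to one EQL volume; with the new limit there should be 4
    esxcli storage core path list --device=naa.6090a0xxxxxxxx

    # List the iSCSI sessions established by the software iSCSI adapter
    esxcli iscsi session list --adapter=vmhba37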

On the same host where MEM is installed with MPIO, the Veeam VM sits on a datastore local to the host, with a 250 GB VMDK on the EQL attached to Veeam as the repository. The (default) Veeam proxy is set to use Direct SAN, and when the backup of the VM starts, [san] mode is indeed used. Veeam is currently reporting 120 MB/s throughput and an average processing rate of 110 MB/s [Busy: Source 64% > Proxy 49% > Network 56% > Target 32%].

I retested the same backup job with the proxy set to Virtual Appliance (which drops to [nbd]): 95 MB/s processing rate and 92 MB/s throughput [Busy: Source 71% > Proxy 54% > Network 49% > Target 25%].
VCP5-DCV

