- Novice
vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups
Hi everyone!
Can anyone share their Veeam backup speeds?
I've got several different setups (iSCSI MPIO, NFS, SMB and DAS) behind a vSphere-based Veeam instance, and I'm trying to squeeze every last bit of performance out of what we've got to find the best realistic backup speeds.
With backups on the same subnet (Veeam on the same subnet as the vSphere hosts, no gateways in the path), I'm getting about 70MB/s for VM backups to VMDK repositories presented to Veeam over 4-path MPIO with the Round Robin PSP set to switch paths every 1 IOPS. All channels are used equally, but the speed isn't what I'd like. What backup/restore speeds are other people getting in the same or similar scenarios?
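For reference, the per-device Round Robin / 1 IOPS setting was applied roughly like this from the ESXi shell (a sketch only; naa.xxxx stands in for the actual EQL volume identifier):

esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxx --type iops --iops 1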
Thanks!
- Chief Product Officer
Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups
Hello.
You can find plenty of existing discussions where people share performance numbers on these forums, so there is really no need for yet another "post your speed" thread. It's also pointless, because you will see people reporting anywhere from a few MB/s (when there are issues) to a few hundred MB/s depending on their hardware and setup.
So I recommend we rather focus on finding the issue in your setup instead.
I can say that 70MB/s is laughable for pretty much any setup on proper production hardware that I can imagine, except perhaps when using NBD over 1 Gbps to grab the source data... thus my questions for a start:
- What processing mode does your job use?
- What is reported as a bottleneck by the job?
If you are not sure what these are, please review the sticky FAQ topic.
Thanks!
- Novice
Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups
Gostev,
I'm thinking there are misconfigurations somewhere in my vSphere environment, and perhaps some switches that don't have jumbo frames and flow control enabled; it's a work in progress and I'm looking into it.
To answer your questions, the reported bottleneck varies with the destination repository in use. For the VMDK datastores hosted on the SAN, I'm usually looking at Source as the bottleneck, and I've set the Veeam proxy via Direct SAN, Virtual Appliance and Auto.
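For the jumbo frame side, this is roughly what I plan to run from the hosts to verify the path end-to-end (a sketch; the group IP is a placeholder for our EQL group address):

esxcli network vswitch standard list     # MTU on the iSCSI vSwitch should be 9000
esxcli network ip interface list         # MTU on the iSCSI vmkernel ports should be 9000
vmkping -d -s 8972 <EQL group IP>        # don't-fragment ping with an 8972-byte payload; fails if any hop isn't jumbo-enabled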
Thanks.
VCP5-DCV
- Veeam Software (Alexander Fogelson)
Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups
This thread can give you some hints: Equallogic HIT Kit and Direct SAN Access
- Chief Product Officer
Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups
vmNik wrote: I've set the Veeam proxy via Direct SAN, Virtual Appliance and Auto
Not sure what you are saying here, as these are mutually exclusive settings and you can only pick one of them. However, even more important to know is the effective mode used by the job; this is displayed in the job log next to each processed disk.
- Novice
Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups
I went ahead and reinstalled Dell MEM 1.2 and let it recreate the iSCSI vSS; I'm testing with 4-path MPIO now. With the Veeam VM on a host that has a 200GB datastore mapped to the EQL, a backup of a VM on the same host via Virtual Appliance mode peaked at around 140MB/s, which is a big improvement. I think more can be squeezed out of this 4x1Gbps setup, so I'm going to check the switches to make sure they have 9000 MTU, flow control, etc. Tested with IOMeter and got around 200MB/s reads on that EQL-backed VMDK drive in a Win7 VM. BTW, this host is on vSphere 5.0 and I'm updating it to 5.5 at the moment; I know there are many improvements in how MEM interfaces with iSCSI in releases newer than 5.0.
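To confirm MEM is actually owning the paths, I'm checking roughly like this (a sketch; naa.xxxx is a placeholder for the EQL volume device ID):

esxcli storage nmp device list                   # Path Selection Policy should show the Dell EqualLogic PSP (DELL_PSP_EQL_ROUTED) rather than VMW_PSP_RR
esxcli storage core path list --device naa.xxxx  # should show 4 active paths for the volume in this setup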
Given this scenario with a 4x1Gbps setup to a SAN, should I be expecting >140MB/s transfers from a Veeam VM to a VMDK?
Thanks!
VCP5-DCV
- Novice
Re: vSphere via iSCSI MPIO on Equallogic SAN | Slow VM Backups
I edited the Dell MEM config in vSphere on the host I'm testing with, raising maxsessions so that all 4 iSCSI paths are used (although not really recommended in production), and went back to the backup testing.
On the same host where MEM is installed with MPIO, the Veeam VM sits on a datastore local to the host and uses a 250GB EQL-backed VMDK as its repository. The default Veeam proxy is set to Direct SAN, and when the backup of the VM is started, [san] mode is indeed used. Currently getting 120MB/s throughput in Veeam and an average processing rate of 110MB/s [Busy: Source 64% > Proxy 49% > Network 56% > Target 32%].
Retested the same backup job with the proxy set to Virtual Appliance (it drops to [nbd]): 95MB/s processing rate and 92MB/s throughput [Busy: Source 71% > Proxy 54% > Network 49% > Target 25%].
VCP5-DCV