
Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by cpeheotel »

So I have an odd speed issue that I'm attempting to troubleshoot from as many angles as possible.

Setup:
1 physical backup server: Dell T620, 32 GB RAM, 2x 6-core Xeon processors, Broadcom 5719 NIC for iSCSI, MD1200 DAS as the backup repository.

Compellent SAN with 8x 1 Gb links across 2 SC40 controllers, all paths active. Two disk shelves: one with 24x 400 GB 10K SAS disks, one with 12x 4 TB 7.2K disks.

Nexus 3048 switches with 2 VLANs for iSCSI, 1 VLAN for vMotion, and 1 VLAN for normal data traffic.

The Veeam server is ONLY doing Veeam currently (DPM in the not-too-distant future). MPIO is set up and working. I've adjusted some registry settings for timeouts, etc., per the Compellent documentation. Direct SAN access is working correctly between the Veeam server and the SAN.
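For anyone comparing notes, this is roughly how I double-check those timeout values afterwards. A rough Python sketch, assuming the usual Microsoft iSCSI initiator value names (MaxRequestHoldTime, LinkDownTime) that the Compellent doc points at; the numbered instance subkeys vary per machine:

```python
# Sketch: enumerate iSCSI initiator instances and print the timeout
# values the Compellent doc calls out. Run on the backup server itself
# (Windows, admin rights required).
import winreg

# Class key for SCSI/iSCSI adapters; each numbered subkey is one instance.
CLASS_KEY = r"SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}"
VALUES = ("MaxRequestHoldTime", "LinkDownTime")  # names per the Compellent doc

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as class_key:
    index = 0
    while True:
        try:
            instance = winreg.EnumKey(class_key, index)
        except OSError:
            break  # no more instances
        index += 1
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                rf"{CLASS_KEY}\{instance}\Parameters") as params:
                for name in VALUES:
                    try:
                        value, _ = winreg.QueryValueEx(params, name)
                        print(f"{instance}: {name} = {value}")
                    except FileNotFoundError:
                        pass  # value not set on this instance
        except FileNotFoundError:
            pass  # instance has no Parameters subkey
```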

The problem is I can't get more than 32 MB/s when running a backup job with a single VM in it. Likewise, if a job contains many VMs, once the bulk of them are done, the speed plummets.

I can get the Compellent to reliably give out 300 MB/s+ when running 12 VMs at once, but that doesn't last long once the smaller VMs in the job start to finish.

The resources on the server seem to barely be getting touched. I've adjusted and tried a myriad of settings on the Broadcom NICs and can't seem to get anywhere with that. Jumbo frames are enabled all the way through the path, flow control is enabled, etc.

I've used Veeam for years with less advanced/expensive/'fast' SANs and never had backup throughput this bad. We've got a 10 Gb network coming for the SAN and backup server, but I'm afraid I won't see any gains given the poor performance described above. I've only got two ideas left:
1. Replace the physical Broadcom NIC with an identical one.
2. Try using one of the two onboard Intel NICs and see if there's any difference.

Does anyone have any ideas? This is getting unbearable when doing full backups with large VMs. From the SAN's point of view, it is using less than one 7.2K disk's worth of IOPS when running my test backup job with a single VM in it. :/ If I tell the server to use network mode instead (so it's really only using one 1 Gb NIC), I get 45-50 MB/s through the VM management network. Thanks for any ideas!
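And in case anyone wants to sanity-check the raw NIC path the way I've been trying to, here's a bare-bones one-way TCP throughput test. Just a sketch (iperf does the same job better), with a placeholder port:

```python
# Minimal one-way TCP throughput test to sanity-check a single NIC path.
# Run "receive" on one box, then send to it from the other.
import socket, sys, time

PORT = 5001                     # placeholder port
CHUNK = 1024 * 1024             # 1 MiB per send/recv
TOTAL = 2 * 1024 * 1024 * 1024  # push 2 GiB

def receive():
    srv = socket.socket()
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()
    got, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    secs = time.time() - start
    print(f"received {got / 1e6:.0f} MB in {secs:.1f}s = {got / 1e6 / secs:.0f} MB/s")

def send(host):
    cli = socket.create_connection((host, PORT))
    buf = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        cli.sendall(buf)
        sent += len(buf)
    cli.close()

if __name__ == "__main__":
    # usage: script.py receive   |   script.py <receiver-ip>
    receive() if sys.argv[1] == "receive" else send(sys.argv[1])
```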

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by foggy (Veeam Software) »

What are the full bottleneck stats for the backup job (available in the job statistics window when you right-click the job and select Statistics)?

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by cpeheotel »

For a job with 1 VM: Source 99% - Proxy 9% - Network 1% - Target 0%

For a job with 12+ VMs: Source 98% - Proxy 35% - Network 11% - Target 11%

Forward incremental is how I've been rolling lately. I see this behavior on both incrementals and fulls; obviously it's more easily seen over the time frame of a full.

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by foggy (Veeam Software) »

Source at 99% means the job spends almost all of its time waiting on reads from the source storage, so the iSCSI path itself is the prime suspect. Try disabling MPIO and see if that helps. Also, here are some tips on improving the throughput of the iSCSI connection.
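To keep the comparison clean, it may be worth capturing the MPIO disk/policy state before and after the change; a rough sketch (mpclaim output format differs between OS versions, and it must run elevated):

```python
# Sketch: snapshot MPIO state with mpclaim before/after disabling it,
# so before/after throughput numbers are compared on a known config.
import subprocess

def run(args):
    out = subprocess.run(args, capture_output=True, text=True)
    print("$", " ".join(args))
    print(out.stdout or out.stderr)

run(["mpclaim", "-s", "-d"])  # list MPIO disks and their load-balance policies
# To experiment with policies instead of disabling MPIO outright, e.g.:
# run(["mpclaim", "-l", "-m", "4"])  # 4 = Least Queue Depth (2 = Round Robin)
```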

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by cpeheotel »

I've already tried adjusting the autotuning values, triple-checked flow control, and tried with and without RSS, Checksum Offload, and Large Send Offload. The switches are also beyond reproach in terms of being TOR data-center caliber. I'll give a single path a shot early this afternoon.

To clarify further, I'm on Veeam 8 (for the first time) with all publicly available patches. The multipath setup on the backup server is using round robin; I've tried LQD with no difference. When I watch the simple Microsoft performance stats as a backup occurs, every iSCSI NIC is happily chugging along at around 60 Mbps, with the load distributed seemingly perfectly across each NIC.

I'll also mount a test 1 TB SAN volume I can format NTFS just to try some simple file copies. I know it's not 100% analogous, but it should provide some insight on large file copies. However, my gut feeling is that those will go quite fast, and that the issue is Veeam and how, or how much, data it is requesting from the SAN. I've got no throttling enabled anywhere. I'm wondering if this thread may also describe a related issue to what I'm seeing: http://forums.veeam.com/vmware-vsphere- ... ilit=iscsi
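Here's the dumb sequential-read test I have in mind for that volume (a sketch; the path is a placeholder, and the file should be bigger than RAM so the Windows cache doesn't flatter the numbers):

```python
# Dumb single-stream sequential read of a large file on the test volume,
# timed in MB/s; roughly what one backup source stream looks like.
import time

PATH = r"T:\testfile.bin"  # placeholder: big file on the mounted SAN volume
BLOCK = 4 * 1024 * 1024    # 4 MiB reads

total, start = 0, time.time()
with open(PATH, "rb", buffering=0) as f:
    while True:
        chunk = f.read(BLOCK)
        if not chunk:
            break
        total += len(chunk)
secs = time.time() - start
print(f"read {total / 1e6:.0f} MB in {secs:.1f}s = {total / 1e6 / secs:.0f} MB/s")
```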
Thanks again!

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by cpeheotel »

I was unable to test a single path, as we have a job for a 2 TB VM that's been running since 3 AM and is still going :(.

I did present a volume on the SAN to the server, format it, and test I/O. In the middle of the day, with the SAN under load, I was able to read data to the local server at 175 MB/s and write data to the SAN at 300-400 MB/s, FWIW.

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by cpeheotel »

Testing a single path to the SAN currently; I just disabled the other 3 NICs. Performance is 2-2.5x better. But at the OS level I was able to snag more than 3 Gb/s writing and close to 2 Gb/s reading using MPIO... so Veeam is not able to utilize the multiple paths correctly...
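My working theory on the gap: MPIO round robin spreads outstanding I/Os across paths, so a single synchronous read stream only ever keeps one path busy at a time, while my OS-level test had lots of I/O in flight. A sketch of how I'd demonstrate it on the test volume (path and sizes are placeholders; the file must be at least workers x reads x block bytes, i.e. 8 GiB for the 8-reader run):

```python
# Sketch: 1 reader vs 8 concurrent readers against the same large file.
# MPIO round robin distributes *outstanding* I/Os, so more in-flight
# reads should light up more paths.
import time
from concurrent.futures import ThreadPoolExecutor

PATH = r"T:\testfile.bin"  # placeholder: large file on the SAN test volume
BLOCK = 4 * 1024 * 1024    # 4 MiB per read
READS_PER_WORKER = 256     # 1 GiB per worker

def worker(worker_id, stride):
    # Each worker opens its own handle and reads an interleaved stripe
    # of blocks, so workers never contend on one file position.
    with open(PATH, "rb", buffering=0) as f:
        for i in range(READS_PER_WORKER):
            f.seek((worker_id + i * stride) * BLOCK)
            f.read(BLOCK)

def bench(workers):
    start = time.time()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for w in range(workers):
            pool.submit(worker, w, workers)
    secs = time.time() - start
    mb = workers * READS_PER_WORKER * BLOCK / 1e6
    print(f"{workers} reader(s): {mb:.0f} MB in {secs:.1f}s = {mb / secs:.0f} MB/s")

bench(1)  # roughly one backup source stream
bench(8)  # enough concurrency to keep several paths busy
```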

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by PMB_CTN »

cpeheotel wrote: Testing a single path to the SAN currently; I just disabled the other 3 NICs. Performance is 2-2.5x better. But at the OS level I was able to snag more than 3 Gb/s writing and close to 2 Gb/s reading using MPIO... so Veeam is not able to utilize the multiple paths correctly...
Interested to know if this has been resolved yet...?

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by sid6point7 »

Any update on this? I'm in a very similar situation with Veeam 9 Update 1 (greenfield deployment) over Fibre Channel: slow performance over VDDK, yet when I present a new LUN to the proxy host and do some speed tests, I'm getting 200-300 MB/s write and 1 GB/s read. Is there an issue with the way Veeam is using VDDK? I haven't tested disabling multipathing yet; I'm going to give that a go.

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by Vitaliy S. (VP, Product Management, Veeam Software) »

It could be that you've hit VMware interface throttling. If you want to investigate it further, please contact our support team and let them have a look at your configuration.

Re: Slow iSCSI w/ Compellent + Veeam 8 + Server 2012R2

Post by antipolis »

Greetings,

I have just finished migrating our production storage to a Dell Compellent, and I also noticed rather slow iSCSI performance from our physical Windows Veeam server: our network links only go up to 300-400 Mbps, which of course impacts Veeam backups. In my case, though, it's not a Veeam problem, because directly mapping a LUN, formatting it NTFS, and running IOMeter/CrystalDiskMark shows the same numbers...

After turning on jumbo frames, Windows iSCSI performance improved dramatically; I'm now almost saturating the two GbE links (with MPIO).

From my testing, it seems activating jumbo frames on the Compellent does not make a big difference for ESX hosts (where I had very good speeds already, even with MTU 1500), but it impacts Windows a lot more.

Maybe someone else with a similar setup can confirm this?

Edit: Win2016 here.
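For what it's worth, the quick end-to-end check I used before and after enabling jumbo frames (Windows ping with the don't-fragment flag; 8972 = 9000 minus 28 bytes of IP+ICMP headers; the target IP is a placeholder):

```python
# Verify jumbo frames end-to-end: a 9000-byte MTU path should carry an
# 8972-byte ICMP payload (9000 - 20 IP - 8 ICMP) with DF set.
import subprocess

TARGET = "192.0.2.10"  # placeholder: an iSCSI portal IP on the Compellent

for size in (1472, 8972):  # standard-MTU payload, then jumbo payload
    result = subprocess.run(
        ["ping", "-f", "-l", str(size), "-n", "2", TARGET],
        capture_output=True, text=True,
    )
    ok = result.returncode == 0 and "fragmented" not in result.stdout.lower()
    print(f"payload {size}: {'OK' if ok else 'FAILED (check MTU on every hop)'}")
```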