Odd Performance Behavior - Jumbo Frames
Rob Miller (Enthusiast):
We've got a Dell MD3000i with 2 hosts attached via Dell Gbit switches with Maximum Jumbo Frame support and Flow Control enabled. Performance of the actual VMs has never been better.
We've also got a Dell NAS that is attached to both the LAN and iSCSI networks. It's running Veeam. Backup speed has always been pretty decent, with about 60% utilization on the single iSCSI Gbit interface of the Veeam server during a backup. We've always been very happy with the speed of backups from SAN.
We recently enabled jumbo frames everywhere: both ESX hosts, the switches, all four ports of the MD3000i, and the Veeam server (Win 2K3 with Broadcom NICs). VMs actually "felt" a bit more responsive, and were without a doubt no slower. However, Veeam backup performance tanked when I set its iSCSI NIC to MTU 9000. Putting it back down to 1500 restores the old speedy backups.
Now this seems counterintuitive to me, given that the MD3000i, the Dell iSCSI switches, and the ESX iSCSI vSwitches are all set to use jumbo frames. I would have thought leaving the Veeam server at 1500 would cause backup speed to tank due to the mismatch, but no: setting the Veeam server to 9000 is what tanks it.
Any ideas on this? It's making our backups take three times as long with jumbo frames enabled on the Veeam server. Thanks in advance.
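A quick way to sanity-check whether 9000-byte frames really make it end to end is a non-fragmenting ping sized to the full MTU. The sketch below is only an illustration: the portal address is a placeholder, and it just wraps the standard don't-fragment ping flags. On the ESXi side the equivalent check would be vmkping -d -s 8972 against the same portal.

# Quick end-to-end jumbo-frame check: send a ping with the don't-fragment
# bit set and a payload sized so the whole IP packet is 9000 bytes
# (8972 bytes of ICMP data + 20-byte IP header + 8-byte ICMP header).
# If anything in the path is still at MTU 1500, the ping fails outright
# instead of being silently fragmented.
import platform
import subprocess

TARGET = "192.168.130.101"   # placeholder for one of the MD3000i iSCSI portal IPs
PAYLOAD = 9000 - 28          # 8972 bytes of ICMP data -> 9000-byte IP packet

if platform.system() == "Windows":
    # Windows ping: -f = don't fragment, -l = payload size in bytes
    cmd = ["ping", "-f", "-l", str(PAYLOAD), "-n", "4", TARGET]
else:
    # Linux ping: -M do = don't fragment, -s = payload size in bytes
    cmd = ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "4", TARGET]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
# A non-zero exit code means either the host is unreachable or some hop
# refused the 9000-byte frame.
print("jumbo path OK" if result.returncode == 0 else "jumbo path broken (or host unreachable)")

If that ping succeeds from the Veeam server to every portal but backups are still slow, the problem is more likely latency or switch behavior than a plain MTU mismatch.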
Tom Sightler (VP, Product Management):
Re: Odd Performance Behavior - Jumbo Frames
Not knowing anything about the switches, my first guess would be an increase in switch latency. Veeam/vStorage API (I don't know which one) doesn't seem to do much readahead or use a very large queue depth, so latency can have a huge impact. While many switches support "jumbo frames", most of them do so using the typical "store and forward" approach, which means they have to receive the entire frame on one port before switching it out another port. For a jumbo frame this takes six times as long, so the individual latency of each block may actually be higher with jumbo frames. Some switches let you enable a "cut-through" mode to offset this, although I think that is mostly available on 10Gb switches.
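To put rough numbers on that, here is a back-of-the-envelope sketch. It assumes 1Gb links and two store-and-forward switch hops between the Veeam server and the array; the figures are illustrative only, not measurements from this environment.

# Per-hop store-and-forward delay: the switch must buffer the entire frame
# before it can start forwarding it, so each hop adds roughly
# frame_size / link_rate of latency.
LINK_RATE_BPS = 1_000_000_000  # 1 Gbit/s

def store_and_forward_delay_us(frame_bytes, hops=1):
    """Latency added by store-and-forward switching, in microseconds."""
    return hops * frame_bytes * 8 / LINK_RATE_BPS * 1e6

for size in (1500, 9000):
    print(f"{size:>5}-byte frame over 2 hops: {store_and_forward_delay_us(size, hops=2):6.1f} us")

# Prints roughly:
#  1500-byte frame over 2 hops:   24.0 us
#  9000-byte frame over 2 hops:  144.0 us

An extra hundred microseconds or so per frame is tiny on its own, but with little readahead and a shallow queue depth it lands on every round trip, which fits the behavior described above.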
Also, is everything on the same layer-2 LAN, or are your switches having to perform layer-3 routing of the jumbo frames? Many such switches support jumbo frames at layer 2 but have to fragment them down to normal-size frames to pass a layer-3 packet, which can introduce fragmentation and overhead.
I've been unable to measure a difference in our environment between jumbo frames and regular frames, but we use hardware HBAs, so there's not much concern about CPU overhead.
Rob Miller (Enthusiast):
Re: Odd Performance Behavior - Jumbo Frames
Thanks for the response. The iSCSI switches are dedicated, with no other traffic on them. Nothing but iSCSI. Everything is in the default VLAN with no routing. Both switches are Dell PowerConnect 6224s. I can find no way to change the type of switching they are performing.
The only thing I thought was odd was that when I was setting up jumbo frames, I first set everything to 9000. The ESXi hosts had no problem passing traffic through the switches to the MD3000i and back; that worked fine. But as soon as I set the Veeam server to 9000, backups failed. The job basically froze. Then I changed the Dell switches to allow a maximum frame size of 9216, and suddenly Veeam backups worked again, albeit, as I noticed later, much slower with the NIC set to 9000 instead of 1500.
Any good recommendations on what to set the transmit and receive buffers on the NIC to? Should they be at least equal to 9000 as well? I'm not even sure whether that number is in bytes or kilobytes in the NIC config.
Rob Miller (Enthusiast):
Re: Odd Performance Behavior - Jumbo Frames
Well, I did just read that the switch's maximum frame size should be set higher than the host MTU to leave room for the frame headers, so I guess that's why it didn't work at 9000 but does at 9216.
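The headroom needed is just the Ethernet framing around the 9000-byte IP payload. The figures below are the standard header sizes rather than anything quoted from Dell, and whether a given switch counts the 4-byte FCS toward its frame-size limit varies by vendor.

# Why a 9000-byte host MTU needs a larger switch frame-size limit:
# the host MTU counts only the IP packet, while the switch's maximum
# frame size counts the whole Ethernet frame around it.
HOST_MTU   = 9000  # IP packet size configured on the NICs
ETH_HEADER = 14    # dest MAC + src MAC + EtherType
VLAN_TAG   = 4     # 802.1Q tag, only if the port is tagged
FCS        = 4     # frame check sequence (may or may not count, per vendor)

untagged = HOST_MTU + ETH_HEADER + FCS             # 9018 bytes
tagged   = HOST_MTU + ETH_HEADER + VLAN_TAG + FCS  # 9022 bytes

print(untagged, tagged)  # both fit comfortably under a 9216-byte limit

Either way, 9216 leaves plenty of headroom, while a switch limit of exactly 9000 would drop every full-size frame the hosts sent.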