Daveyd
Veteran
Posts: 283
Liked: 11 times
Joined: May 20, 2010 4:17 pm
Full Name: Dave DeLollis
Contact:

Improving speed of iSCSI SAN backups

Post by Daveyd »

Currently, I am running Veeam 5.1 on a physical 2008 R2 server. The Veeam server is attached to our iSCSI fabric/SAN via a single 1Gb NIC using the Microsoft iSCSI initiator software, and it has read-only access to all iSCSI LUNs. I have presented an iSCSI LUN to the Veeam server and formatted it as NTFS to store the backups.

Right now, I am running 2 concurrent backup jobs. I watch the 1Gb iSCSI link from the Veeam server to the iSCSI SAN, and the backups saturate it completely when they run, which is completely understandable. The single 1Gb link is obviously the bottleneck. Is there a way, besides Fibre Channel or 10GbE, to increase the bandwidth? I believe you cannot team iSCSI NICs that use the MS initiator (true?). Any ideas?
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Improving speed of iSCSI SAN backups

Post by tsightler »

Sure, the MS iSCSI initiator supports multipath I/O (MPIO). Different arrays handle this differently; for example, EqualLogic uses their own "plugin" that works with the MS iSCSI initiator and adds some "intelligence" to the load-balancing algorithm, and I think EMC does the same, but some arrays just support the native MS iSCSI MPIO. All of our iSCSI hosts use at least two 1Gb links, for both redundancy and performance.
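For the MS initiator on Windows, a rough sketch of claiming the iSCSI disks for MPIO with the built-in mpclaim tool might look like the following (syntax from memory; check mpclaim /? and your array vendor's docs, since the DSM and available load-balance policies vary by vendor):

```shell
:: Enable the MPIO feature first (Server 2008 R2), then claim all iSCSI
:: devices for MPIO. The device string below is the standard hardware ID
:: for disks presented over the Microsoft iSCSI bus. -r reboots when done.
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: After the reboot, verify that the claimed disks show multiple paths:
mpclaim -s -d
```

Note that after claiming the disks you still need to log in one iSCSI session per NIC (in the initiator control panel or with iscsicli) before MPIO actually has multiple paths to balance across.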
Daveyd
Veteran
Posts: 283
Liked: 11 times
Joined: May 20, 2010 4:17 pm
Full Name: Dave DeLollis
Contact:

Re: Improving speed of iSCSI SAN backups

Post by Daveyd »

tsightler wrote:Sure, the MS iSCSI initiator supports multipath I/O (MPIO). Different arrays handle this differently; for example, EqualLogic uses their own "plugin" that works with the MS iSCSI initiator and adds some "intelligence" to the load-balancing algorithm, and I think EMC does the same, but some arrays just support the native MS iSCSI MPIO. All of our iSCSI hosts use at least two 1Gb links, for both redundancy and performance.
I am connected to a DataCore server which sits in front of an HP MSA60 and provides storage thin provisioning for the SAN. I need to install DataCore's MPIO driver in order to utilize MPIO, but that will give me just failover capabilities. I am looking to get more performance from my iSCSI connections; that's why I was interested in teaming 2 NICs that then use the MS iSCSI initiator/DataCore MPIO driver. It seems as if a 2-NIC iSCSI team is supported using VMware's iSCSI initiator but not Microsoft's?
Daveyd
Veteran
Posts: 283
Liked: 11 times
Joined: May 20, 2010 4:17 pm
Full Name: Dave DeLollis
Contact:

Re: Improving speed of iSCSI SAN backups

Post by Daveyd »

Anyone?
Gostev
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Improving speed of iSCSI SAN backups

Post by Gostev »

I just did a quick search on Google, and here is what I found:
Use Microsoft MultiPath IO (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. This is because the teaming software that is used in these types of configurations is not owned by Microsoft, and it is considered to be a non-Microsoft product. If you have an issue with network adapter teaming, contact your network adapter vendor for support. If you have contacted Microsoft Support, and they determine that teaming is the source of your problem, you might be required to remove the teaming from the configuration and/or contact the provider of the teaming software.
More information:
Installing and Configuring Microsoft iSCSI Initiator
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Improving speed of iSCSI SAN backups

Post by tsightler »

In general, teaming of network interfaces doesn't add significant bandwidth anyway, because teaming typically balances traffic on a per-IP, per-port, or per-MAC-address basis. Since you'd only have a single TCP connection, it would only ever use one of the two links. Some NIC bonding drivers and switches do have a "round-robin" mode, which sends one packet per link, but this can have issues with out-of-order packets, which will kill performance.
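To illustrate why hash-based teaming can't help a single backup stream, here is a hypothetical sketch of the kind of per-flow hash a switch or bonding driver applies when picking a link (the exact hash varies by vendor; the addresses and ports below are made up):

```shell
#!/bin/sh
# Hypothetical per-flow hash: the same 4-tuple always lands on the same
# link, so one iSCSI TCP connection can never exceed one link's bandwidth.
src=1001    # made-up source IP, as an integer
dst=2002    # made-up destination IP
sport=49152 # ephemeral source port
dport=3260  # iSCSI target port
links=2     # number of teamed links

# Every packet of this flow hashes to the same link index:
echo $(( (src ^ dst ^ sport ^ dport) % links ))
```

Because the inputs never change for the life of the connection, the result never changes either; only multiple independent connections (which is exactly what MPIO sessions give you) can spread load across both links.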

As I stated above, you need to do MPIO within the iSCSI layer. If your storage provider only supports failover for MPIO, then I'm not really sure where to tell you to go. I don't really know anything about DataCore, but I was under the impression that they supported load-balanced MPIO with their newest stuff. Without good, load-balancing MPIO, iSCSI isn't that interesting, unless you have 10Gb.