Discussions specific to the VMware vSphere hypervisor
lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 23, 2012 8:14 am

Hi. I have set up Veeam and am now testing some replication jobs before we decide whether we are going to purchase Veeam B&R.

The problem I'm having is that I can't seem to get Veeam to use a different NIC for each job. The data flows as follows:

Multiple Jobs ---- SAN ---[4 Gb/s]--- Veeam Backup Server ---[4 x 1 Gb/s]--- Destination ESXi Server (ESXi 5) ---[4 Gb/s]--- Destination SAN


If I run traffic from four external sources to four virtual machines on the ESXi server, they each use their own NIC as long as the others are saturated.

1 ------------- A
2 ------------- B
3 ------------- C
4 ------------- D
(1 Gb/s each)

Now, I know that a virtual machine can only use one pNIC at a time, and I guess the NBD transfer works the same way...? But if it does, it should select a different NIC for each job.

The ESXi server uses EtherChannel and the Veeam server uses LACP.

dellock6
Veeam Software
Posts: 5628
Liked: 1575 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NIC Teaming & Veeam v6

Post by dellock6 » Mar 23, 2012 1:26 pm

I can't fully understand from the diagram you made: is the Veeam server on a single 1 Gb/s link, or on an aggregated 4 x 1 Gb/s? I assume Veeam is installed on a physical server, right?
Also, remember that NBD transport uses the management network of the ESXi host; the vNICs of the VMs are not involved...
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2019
Veeam VMCE #1

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 23, 2012 1:40 pm

The Veeam server is physical and has a 4 Gb/s FC link to the SAN.
The Veeam server has a 4 x 1 Gb/s link (LACP) to the Cisco switch, and the ESXi server has a 4 x 1 Gb/s link (EtherChannel) to the Cisco switch.
Veeam server NICs are 1 x Intel PRO dual-port server adapter and 2 x built-in Broadcom NetXtreme II.
The Veeam server is an IBM x3650, 16 cores with 12 GB RAM.

The ESXi server has one vSwitch with both the virtual machine network and the management network connected to it.
The ESXi server (target) is only used for receiving replication traffic and not for other tasks.
ESXi server NICs are 4 x built-in Broadcom NetXtreme II.
The ESXi server is an IBM x3850 M2, 12 cores with 128 GB RAM.

Cisco Config for ESXi:

Code: Select all

interface Port-channel12
 description DESX01
 switchport access vlan 999
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101,102,105,107,110,133,134,999
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk

interface GigabitEthernet0/7
 switchport access vlan 999
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101,102,105,107,110,133,134,999
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 12 mode on
!
interface GigabitEthernet0/8
 switchport access vlan 999
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101,102,105,107,110,133,134,999
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 12 mode on
!
interface GigabitEthernet0/9
 switchport access vlan 999
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101,102,105,107,110,133,134,999
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 12 mode on
!
interface GigabitEthernet0/10
 switchport access vlan 999
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101,102,105,107,110,133,134,999
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 12 mode on
Cisco Config for Veeam Server:

Code: Select all

interface Port-channel15
 description Veeam
 switchport access vlan 101
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101
 switchport mode trunk
 spanning-tree portfast trunk

interface GigabitEthernet0/3
 description ->Veeam
 switchport access vlan 101
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101
 switchport mode trunk
 spanning-tree portfast trunk
 channel-group 15 mode active
!
interface GigabitEthernet0/4
 description ->Veeam
 switchport access vlan 101
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101
 switchport mode trunk
 spanning-tree portfast trunk
 channel-group 15 mode active
!
interface GigabitEthernet0/5
 description ->Veeam
 switchport access vlan 101
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101
 switchport mode trunk
 spanning-tree portfast trunk
 channel-group 15 mode active
!
interface GigabitEthernet0/6
 description ->Veeam
 switchport access vlan 101
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 101
 switchport mode trunk
 spanning-tree portfast trunk
 channel-group 15 mode active

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 23, 2012 1:44 pm

You might have given me an idea with the management network comment... The Veeam server is on a different network than the management network on the ESXi server, so the traffic is MAYBE routed via the core switch... I'm going to test now and get back to you.

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 23, 2012 2:30 pm

Well... yes and no. The servers were on separate networks and the traffic had to go via the core switch to be routed. Fixing this resulted in more stable and marginally faster speeds, but the Veeam server is still only using one NIC even though I have 4 jobs running in parallel...

dellock6
Veeam Software
Posts: 5628
Liked: 1575 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NIC Teaming & Veeam v6

Post by dellock6 » Mar 23, 2012 2:53 pm

Maybe a silly question, but since Veeam is on a physical server, how did you bond the 4 NICs?
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2019
Veeam VMCE #1

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 23, 2012 2:56 pm

Software: Intel PROSet
Protocol: LACP (IEEE 802.3ad Dynamic Link Aggregation)

tfloor
Expert
Posts: 270
Liked: 14 times
Joined: Jan 03, 2012 2:02 pm
Full Name: Tristan Floor
Contact:

Re: NIC Teaming & Veeam v6

Post by tfloor » Mar 23, 2012 6:32 pm

Interesting topic. I have the same problem. Although my speed is acceptable, I never see multiple links used in an LACP trunk.
For me the problem is that multiple jobs don't give me extra bandwidth.

tsightler
VP, Product Management
Posts: 5296
Liked: 2147 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: NIC Teaming & Veeam v6

Post by tsightler » Mar 23, 2012 8:41 pm

What method of hashing are you using for the bonded links? Bonding links will not simply cause traffic to be spread across them; link bonding uses a hashing method to predictably send traffic down a specific link, typically based on src/dst MAC, src/dst IP, src/dst TCP port, or, in the case of ESX, the virtual port of the switch the NIC is connected to.

By default I believe that Cisco switches use IP SRC/DST hash for IP packets, and MAC SRC/DST hash for non-IP packets. That means that, if all of the traffic is between the same two IP addresses, that traffic will never spread across multiple NICs. Typically LACP is only useful in scenarios where there are many-to-many or many-to-one IP addresses, as, in those cases, different SRC/DST pairs are hashed differently and sent over different links. If you want traffic spread across multiple links, then you need traffic either from or to multiple IP addresses.
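
To make this concrete, here's a toy sketch in Python with made-up addresses (real switches hash only a few low-order bits, and the exact algorithm varies by platform, but the principle is the same):

Code: Select all

import ipaddress

def select_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    # Toy src/dst-IP hash: XOR the two addresses and take the result
    # modulo the number of member links. One address pair always maps
    # to the same link, no matter how many TCP streams you open.
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_links

# One Veeam server talking to one ESXi management IP:
# all four jobs hash to the same member link.
for job in range(4):
    print("job", job, "-> link", select_link("10.0.101.10", "10.0.101.20", 4))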

The KB article at http://kb.vmware.com/selfservice/micros ... Id=1004048 has some additional information on the one-to-one IP scenario.

tfloor
Expert
Posts: 270
Liked: 14 times
Joined: Jan 03, 2012 2:02 pm
Full Name: Tristan Floor
Contact:

Re: NIC Teaming & Veeam v6

Post by tfloor » Mar 26, 2012 9:19 am

We use HP switches, but I never see anything about a hashing method to choose.
On VMware we are using the default.

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 26, 2012 9:51 am

ESXi = Route based on IP hash
Veeam = LACP
Cisco = src-dst-ip

So you are right, but then the question is... do you have an alternative way I can set this up so it works the way I want, or is it simply not possible?

Do I have to upgrade to 10 Gb to be able to push more than 1 Gb through Veeam? 1 Gb is, after all, a serious limitation IMHO...

tfloor
Expert
Posts: 270
Liked: 14 times
Joined: Jan 03, 2012 2:02 pm
Full Name: Tristan Floor
Contact:

Re: NIC Teaming & Veeam v6

Post by tfloor » Mar 26, 2012 10:02 am

ESXi = Route based on the originating virtual port ID
Veeam = LACP static trunk
HP switches = ?

tsightler
VP, Product Management
Posts: 5296
Liked: 2147 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: NIC Teaming & Veeam v6

Post by tsightler » Mar 26, 2012 4:32 pm

So it's important to remember that Veeam does not do anything at the network stack layer; we simply open IP connections just like any other product. How you load balance across links is really up to the networking configuration of the hosts and hardware.

To get load balancing across multiple NICs with Veeam, you must first define exactly which links you are attempting to use and for what traffic. For example, if you have 4 x 1 Gb links talking iSCSI, then you need to use Windows iSCSI multipathing to balance the load; Veeam has nothing to do with this step.

If you have a repository with 4 x 1 Gb NICs, and you want to spread the load from the proxies, then having multiple proxies should work nicely, since they will all have different IP addresses. If you only have a single proxy and a single repository, then adding multiple alias IP addresses to the proxy, and using DNS round-robin for the connection, is typically quite effective.
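
Building on the toy hash above, here's a rough sketch of the alias-IP / round-robin idea (made-up addresses; with a real switch hash the spread won't always be this even, but distinct destination IPs are what give the hash something to work with):

Code: Select all

import ipaddress
import itertools

def select_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    # Same toy src/dst-IP hash as above: one pair -> one member link.
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_links

# Hypothetical alias IPs added to a single proxy/repository, each
# published as an A record behind one round-robin DNS name.
aliases = ["10.0.101.21", "10.0.101.22", "10.0.101.23", "10.0.101.24"]
veeam_server = "10.0.101.10"

# Each job connects to a different answer, so the streams hash to
# different member links instead of piling onto one.
for job, dst in zip(range(4), itertools.cycle(aliases)):
    print(f"job {job} -> {dst} -> link {select_link(veeam_server, dst, 4)}")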

If you can define your exact situation, I'm sure we can come up with a way to load balance in all cases, although it will never be perfect since the balancing is based on simple, predictable hashes. That's just the way link aggregation is designed. Its goal was not to take 4 x 1 Gb links and turn them into a virtual 4 Gb link; it was designed to crudely balance incoming streams across the available links, primarily for trunk links or for servers that serve many, many clients. No single stream will ever exceed 1 Gb.

This is not generally true of load-balancing algorithms used for iSCSI, which balance the load across multiple iSCSI layer-3 links at the storage stack level.

Post a nice picture of your network and we can discuss ways to potentially spread the load.

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 26, 2012 9:28 pm

I could create a picture tomorrow maybe, but it's so simple that maybe this will do:

Physical Veeam backup server -> 4 x 1 Gb/s NICs -> Cisco 3560E -> 4 x 1 Gb/s NICs -> VMware ESXi 5

I have not included what's before the Veeam server, as it is irrelevant.
