-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
NIC Teaming & Veeam v6
Hi. I have set up Veeam and am now testing some replication jobs before we decide whether we are going to purchase Veeam B&R.
The problem I'm having is that I can't seem to get Veeam to use a different NIC for each job. The data flows as follows:
Link is 4 x 1 Gb/s:
Multiple Jobs ---- SAN ---(4 Gb/s)--- Veeam Backup Server ---(1 Gb/s)--- Destination ESXi Server (ESXi 5) ---(4 Gb/s)--- Destination SAN
If I run traffic from four external sources to four virtual machines on the ESXi server, they each use their own NIC as long as the others are saturated:
1 ------------- A
2 ------------- B
3 ------------- C
4 ------------- D
1 Gb/s each
Now, I know that a virtual machine can only use one pNIC at a time, and I guess that the NBD transfer works the same way...? But if it does, then it should select a different NIC for each job.
The ESXi server uses EtherChannel and the Veeam server uses LACP.
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
Argh, why does the forum change formatting... How irritating... It removes spaces...
Link is 4 x 1 Gb/s:
Multiple Jobs ---- SAN ---(4 Gb/s)--- Veeam Backup Server ---(1 Gb/s)--- Destination ESXi Server (ESXi 5) ---(4 Gb/s)--- Destination SAN
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: NIC Teaming & Veeam v6
I can't fully understand from the diagram you made: is the Veeam server on a 1 Gbps link, or on an aggregated 4 x 1 Gbps? I assume Veeam is installed on a physical server, right?
Also, remember that for NBD you use the management network of the ESXi host; the vNICs of the VMs are not involved...
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
The Veeam server is physical and has a 4 Gb/s FC link to the SAN.
The Veeam server has a 4 x 1 Gb/s link (LACP) to the Cisco switch, and the ESXi server has a 4 x 1 Gb/s link (EtherChannel) to the Cisco switch.
The Veeam server NICs are 1 x Intel PRO dual-port server adapter and 2 x built-in Broadcom NetXtreme II.
The Veeam server is an IBM x3650, 16 cores, with 12 GB RAM.
The ESXi server has one vSwitch with both the virtual machine network and the management network connected to it.
The ESXi server (target) is only used for receiving replication traffic and is not used for other tasks.
The ESXi server NICs are 4 x built-in Broadcom NetXtreme II.
The ESXi server is an IBM x3850 M2, 12 cores, with 128 GB RAM.
Cisco config for ESXi:
Code:
interface Port-channel12
description DESX01
switchport access vlan 999
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101,102,105,107,110,133,134,999
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
interface GigabitEthernet0/7
switchport access vlan 999
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101,102,105,107,110,133,134,999
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
channel-group 12 mode on
!
interface GigabitEthernet0/8
switchport access vlan 999
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101,102,105,107,110,133,134,999
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
channel-group 12 mode on
!
interface GigabitEthernet0/9
switchport access vlan 999
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101,102,105,107,110,133,134,999
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
channel-group 12 mode on
!
interface GigabitEthernet0/10
switchport access vlan 999
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101,102,105,107,110,133,134,999
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
channel-group 12 mode on
Cisco config for Veeam Server:
Code:
interface Port-channel15
description Veeam
switchport access vlan 101
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101
switchport mode trunk
spanning-tree portfast trunk
interface GigabitEthernet0/3
description ->Veeam
switchport access vlan 101
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101
switchport mode trunk
spanning-tree portfast trunk
channel-group 15 mode active
!
interface GigabitEthernet0/4
description ->Veeam
switchport access vlan 101
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101
switchport mode trunk
spanning-tree portfast trunk
channel-group 15 mode active
!
interface GigabitEthernet0/5
description ->Veeam
switchport access vlan 101
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101
switchport mode trunk
spanning-tree portfast trunk
channel-group 15 mode active
!
interface GigabitEthernet0/6
description ->Veeam
switchport access vlan 101
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 101
switchport mode trunk
spanning-tree portfast trunk
channel-group 15 mode active
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
You might have given me an idea with the management network comment... The Veeam server is on a different network than the management network on the ESXi server, and the traffic is therefore MAYBE routed via the core switch... I'm going to test now and get back to you.
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
Well... Yes and no... The servers were on separate networks, and the traffic had to go via the core switch to be routed. Fixing this resulted in a more stable and marginally faster speed, but the Veeam server is still only using one NIC even though I have 4 jobs running in parallel...
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: NIC Teaming & Veeam v6
Maybe a silly question, but since Veeam is physical, how did you bond the 4 NICs?
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
Software - Intel PROSet
Protocol - LACP (IEEE 802.3ad Dynamic Link Aggregation)
-
- Veteran
- Posts: 270
- Liked: 15 times
- Joined: Jan 03, 2012 2:02 pm
- Full Name: Tristan Floor
- Contact:
Re: NIC Teaming & Veeam v6
Interesting topic. I have the same problem. Although my speed is acceptable, I never see multiple links used in an LACP trunk.
For me the problem is that multiple jobs do not give me any extra bandwidth.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: NIC Teaming & Veeam v6
What method of hashing are you using for the bonded links? Bonding links will not simply cause traffic to be spread across them; link bonding uses a hashing method to predictably send traffic down a specific link, typically based on src/dst MAC, src/dst IP, src/dst TCP port, or, in the case of ESX, the virtual port of the switch the NIC is connected to.
By default I believe that Cisco switches use an IP SRC/DST hash for IP packets, and a MAC SRC/DST hash for non-IP packets. That means that, if all of the traffic is between the same two IP addresses, that traffic will never spread across multiple NICs. Typically LACP is only useful in scenarios where there are many-to-many or many-to-one IP addresses as, in those cases, different SRC/DST pairs are hashed differently and sent over different links. If you want traffic spread across multiple links, then you need traffic either from or to multiple IP addresses.
The KB article at http://kb.vmware.com/selfservice/micros ... Id=1004048 has some additional information on the one-to-one IP scenario.
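For reference, on a Catalyst 3560-class switch you can check, and if needed change, the EtherChannel hash method globally. A rough sketch (the available options vary by platform and IOS version, so double-check on yours):
Code:
show etherchannel load-balance
!
configure terminal
port-channel load-balance src-dst-ip
end
Keep in mind that with a single Veeam IP talking to a single ESXi management IP, any of these methods will still hash the whole conversation onto one physical link.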
By default I believe that Cisco switches use IP SRC/DST hash for IP packets, and MAC SRC/DST hash for non IP packets. That means that, if all of the traffic is between the same two IP addresses, that traffic will never spread across multiple NICs. Typically LACP is only useful in scenarios where there are many-to-many, or many-to-one IP addresses as, in those cases, difference SRC/DST pairs are hashed differently and sent over different links. If you want traffic spread across multiple links, then you need traffic either from or to multiple IP addresses.
The KB article at http://kb.vmware.com/selfservice/micros ... Id=1004048 as some additional information on the one-to-one IP scenario.
-
- Veteran
- Posts: 270
- Liked: 15 times
- Joined: Jan 03, 2012 2:02 pm
- Full Name: Tristan Floor
- Contact:
Re: NIC Teaming & Veeam v6
We use HP switches, but I never see any option for choosing the hashing method.
On VMware we are using the default.
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
ESXi = Route based on IP hash
Veeam = LACP
Cisco = src-dst-ip
So you are right, but then the question is... Do you have an alternative way I can set this up so that it works the way I want, or is it simply not possible?
Do I have to upgrade to 10 Gb to be able to push more than 1 Gb through Veeam? 1 Gb is, after all, a serious limitation IMHO...
-
- Veteran
- Posts: 270
- Liked: 15 times
- Joined: Jan 03, 2012 2:02 pm
- Full Name: Tristan Floor
- Contact:
Re: NIC Teaming & Veeam v6
ESXi = Route based on the originating virtual port ID
Veeam = LACP static trunk
HP switches = ?
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: NIC Teaming & Veeam v6
So it's important to remember that Veeam does not do anything at the network stack layer; we simply open IP connections just like any other product. How you load balance across links is really up to the networking configuration on the hosts and hardware.
To get load balancing across multiple NICs with Veeam, you must first define exactly which links you are attempting to use and for what traffic. For example, if you have 4 x 1Gb links talking iSCSI, then you need to use Windows iSCSI multipathing to balance the load; Veeam has nothing to do with this step.
If you have a repository with 4 x 1Gb NICs and you want to spread the load from the proxies, then having multiple proxies should work nicely, since they will all have different IP addresses. If you only have a single proxy and a single repository, then adding multiple alias IP addresses to the proxy and using DNS round-robin for the connection is typically quite effective.
If you can define your exact situation I'm sure we can come up with a way to load balance in all cases, although it will never be perfect since the balancing is based on simple, predictable hashes. That's just the way link aggregation is designed. Its goal was not to take 4 x 1Gb links and turn them into a virtual 4Gb link; it was designed to crudely balance incoming streams across the available links, primarily for "trunk links" or for servers that are serving many, many clients. No single stream will ever exceed more than 1Gb.
This is not generally true of load-balancing algorithms used for iSCSI, which balance the load across multiple iSCSI layer-3 links at the storage stack level.
Post a nice picture of your network and we can discuss ways to potentially spread the load.
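As a rough sketch of the alias approach on a Windows proxy or repository (the interface name and addresses below are just placeholders for your environment):
Code:
rem Add extra alias IP addresses to the existing NIC/team on the proxy
netsh interface ipv4 add address "Local Area Connection" 192.168.1.21 255.255.255.0
netsh interface ipv4 add address "Local Area Connection" 192.168.1.22 255.255.255.0
netsh interface ipv4 add address "Local Area Connection" 192.168.1.23 255.255.255.0
Each alias then needs an A record in DNS under the same hostname so that lookups rotate across the addresses.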
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
I could create a picture tomorrow maybe, but it's so simple that maybe this will do...:
Physical Veeam Backup Server -> 4 x 1 Gb/s NICs -> Cisco 3560E -> 4 x 1 Gb/s NICs -> VMware ESXi 5
I have not included what's before the Veeam server as this is irrelevant.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: NIC Teaming & Veeam v6
So you only have a single Veeam server and a single VMware ESXi 5 server? Are you doing direct SAN with iSCSI or are you just using Network mode?
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
I can see that there is some confusion here, so I will now also include the source and target SAN:
Source SAN FC -> Physical Veeam Backup Server -> 4 x 1 Gb/s NICs -> Cisco 3560E -> 4 x 1 Gb/s NICs -> VMware ESXi 5 -> Target SAN FC
As I have written from the start, I use Fibre Channel and NOT iSCSI; I don't know why that came up.
I have highlighted the part that is important, and the part we are talking about. That is: how do I bond the data network from Veeam to the switch and from the switch to VMware ESXi 5 so that I can utilize more of that link?
Whether I have iSCSI, what SAN I use, and whether I have one, two or many ESXi servers is not relevant here.
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: NIC Teaming & Veeam v6
Ok, but as stated by Tom and in the KB from VMware he linked for you:
"Note: One IP to one IP connections over multiple NIC is not supported. (Host A one connection session to Host B uses only one NIC)"
and this is not a Veeam limitation; it comes from the EtherChannel protocol.
So, Veeam has 1 IP, right?
The target ESXi 5 has 1 IP, right?
If that's the case, you are in the situation where EtherChannel cannot help you.
You can speed up replication if you create multiple replica jobs, running at the same time, from the Veeam server to different ESXi servers; this would be one-to-many and is supported.
"Note: One IP to one IP connections over multiple NIC is not supported. (Host A one connection session to Host B uses only one NIC)"
and this is not Veeam limitation, but comes from Etherchannel protocol.
So, Veeam has 1 IP, right?
Target ESXi 5 has 1 IP, right?
If that's the case, you are in the situation where etherchannel cannot help you.
You can speed up the replica if you create multiple replica jobs, running at the same time, from the Veeam servers to different ESXi servers, this would be one-to-many and is supported.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
Yep, I understand that. But he also said something about DNS round robin, and I guess I could Google this, but if anyone has any information about how I can set this up with Veeam it would be greatly appreciated.
I also understand that round robin is a poor man's load balancing, but just by chance, since I have about 80 replica jobs running, it would mean I could use more of the links.
-
- Veteran
- Posts: 270
- Liked: 15 times
- Joined: Jan 03, 2012 2:02 pm
- Full Name: Tristan Floor
- Contact:
Re: NIC Teaming & Veeam v6
lars@norstat.no wrote: I could create a picture tomorrow maybe, but it's so simple that maybe this will do...:
Physical Veeam Backup Server -> 4 x 1 Gb/s NICs -> Cisco 3560E -> 4 x 1 Gb/s NICs -> VMware ESXi 5
I have not included what's before the Veeam server as this is irrelevant.
Same here, but instead of a Cisco switch I am using an HP switch.
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: NIC Teaming & Veeam v6
If you look at the VMware KB article, it is relevant for both Cisco and HP...
About RR, I have never done it, and I too am waiting to hear from Tom how to accomplish it. In high-load scenarios, we have preferred in the past to go directly to 10 Gbit connections. Easier, and not that much more expensive than all the network work we are talking about here.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: NIC Teaming & Veeam v6
lars@norstat.no wrote: Source SAN FC -> Physical Veeam Backup Server -> 4 x 1 Gb/s NICs -> Cisco 3560E -> 4 x 1 Gb/s NICs -> VMware ESXi 5 -> Target SAN FC
So this is a replication scenario between just two ESXi servers, or multiple ESXi servers and a single ESXi 5 host? I'll assume you are retrieving the data from the source SAN via direct FC, and you're looking to get maximum throughput on the target.
One option would be to add multiple management IPs to the ESXi host, and register all of those IP addresses in DNS with the same hostname. That way each connection to the host via network mode would use a different IP address. As long as you had multiple jobs, that should spread the load, at least somewhat.
That being said, I suspect you will likely hit the throughput limits of the ESXi management interface before you get close to 4Gb of throughput. The ESXi management interface was never really designed to push that much data. If I were attempting to get maximum performance from a similar setup, I would suggest configuring at least one, and probably a couple, virtual proxies on the target ESXi 5 host. This allows the data between the source and target to be compressed (typically around 2:1 or more), and, if you have at least two proxies, the traffic should be roughly balanced, assuming the IPs are assigned so that different links are selected by the hash.
If you want to get by with a single proxy on the target, then I'd configure a few extra IP addresses on the source Veeam server and add them to DNS using the same host name; that way connections back from the target proxy will be made to different IP addresses on the Veeam server, which should balance the load.
For example, assume the following layout:
Veeam server name: VSrv01 - IP: 192.168.1.10
Veeam proxy name: VPrxy01 - IP: 192.168.1.20
Normally, connections for replication would be made from the target proxy to the source (192.168.1.20 -> 192.168.1.10). However, if you added additional IP addresses to VSrv01 (192.168.1.11, 192.168.1.12, 192.168.1.13) and had those registered in DNS with the same hostname, then each connection would randomly connect back to one of the 4 different IP addresses.
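Just to illustrate the DNS side (the zone and DNS server names here are placeholders; the addresses are the ones from the example above), on a Windows DNS server the extra records could be added along these lines:
Code:
rem Register all of VSrv01's addresses under the same name (round-robin A records)
dnscmd dns01 /RecordAdd corp.local VSrv01 A 192.168.1.11
dnscmd dns01 /RecordAdd corp.local VSrv01 A 192.168.1.12
dnscmd dns01 /RecordAdd corp.local VSrv01 A 192.168.1.13
Windows DNS has round robin enabled by default, so repeated lookups of VSrv01 will rotate through the registered addresses.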
I'm also curious: it would require a very large change rate before the link speed would really become the limiting factor. Even a single Gb link with no compression can transfer 100GB in less than 15 minutes. In most environments the daily change rate of even TBs of data wouldn't be this high, more typically in the "dozens" of GBs. Are you planning to replicate very often? If so, it seems very unlikely that a 1Gb link would be your limiting factor. If you're replicating every 10 minutes, the typical amount of data is likely to be just a few GBs at most, likely taking < 1 minute even with a single 1Gb link. There will be more time spent taking and removing the snapshot than there will be in transferring the data.
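(Rough math: 1 Gb/s is roughly 125 MB/s, so 100 GB ≈ 102,400 MB ÷ 125 MB/s ≈ 820 seconds, a little under 14 minutes.)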
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: NIC Teaming & Veeam v6
Setting up RR DNS really doesn't have much to do with Veeam itself; it's just a method that can be used to direct incoming connections to multiple IP addresses.
It's a little confusing above because I mention two different scenarios. In one, there's a single Veeam server that is talking to a single ESXi host. In that scenario the Veeam server is talking directly to the ESXi host, so you must load balance the connections "into" the ESXi management interfaces. Basically, you configure multiple management IP addresses for ESXi and add them all to DNS with the same hostname. That's it. Each time the hostname is resolved, a different IP address is returned from DNS. It's very much "random" allocation, but it would spread the load somewhat.
In the scenario where there are two Veeam proxies in use, a source and a target proxy, you have to load balance the connections between the Veeam agents. Veeam data connections are always made from the target proxy back to the source proxy, so you have to influence the connection back to the source proxy. To do this you simply add multiple IP addresses to the source proxy and make sure they are all registered in DNS. Typically for Windows this requires no special configuration other than adding the aliases and making sure they register in DNS with the same hostname.
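For the ESXi side, one way to add the extra vmkernel interfaces from the console is something like this (the port group names and addresses are only examples; each new interface needs its own existing port group, and management traffic still has to be enabled on it, e.g. via the vSphere Client):
Code:
# Add additional vmkernel interfaces, each on its own port group
esxcfg-vmknic -a -i 192.168.1.31 -n 255.255.255.0 "Management Network 2"
esxcfg-vmknic -a -i 192.168.1.32 -n 255.255.255.0 "Management Network 3"
Then register those addresses in DNS under the host's name, as described above.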
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
Thanks, this looks good. FYI, I'm trying to replicate about 80 servers continually, so there is quite a lot of traffic...
So then, to recap, I will try to set up the following:
The source consists of the following:
2 x IBM x3850 X5 with a Storwize V7000 SAN connected to an 8 Gb fabric.
The link from the datacenter to the disaster site is 5 km of dark fiber with CWDM.
One channel is used for Fibre Channel (4 Gb/s) to a 4 Gb/s fibre switch at the disaster site.
Disaster site:
Physical Veeam server: IBM x3650 with 12 GB RAM (soon 24 GB), 8 cores with MT (16 virtual cores), a Fibre Channel card and 4 x 1 Gb/s NICs.
The Veeam server is connected to the target-site fibre switch and can see both the source and target SAN directly through Fibre Channel.
The target ESXi host is an IBM x3850 M2 (soon 2 x IBM x3850) with a Fibre Channel card and 4 x 1 Gb/s NICs. The ESXi host can see the source and target SAN directly.
The target SAN is an IBM DS4700 with Fibre Channel disks and SATA disks.
Config:
The Veeam server should have 4 separate network cards, each with its own IP, all entered in DNS under the same alias.
The target ESXi server should have the 4 cards bundled into one EtherChannel (route based on IP hash).
Install 4 x Windows Server 2008 R2 x64 servers on the target ESXi host to use as proxies (in the future, 2 Windows servers on each target ESXi).
The physical Veeam server is set up as the source proxy, and the proxies on the target ESXi are set up as target proxies.
Effect:
The physical Veeam server collects data from the source directly from the SAN via Fibre Channel.
The physical Veeam server compresses data to send over the LAN to the proxies (or is this not worth the CPU? We are talking about replicas, NOT backup).
The proxies connect to the physical Veeam server to collect data and store it on the target SAN.
Since the physical Veeam server has 4 different IPs, the proxies will connect randomly to different NICs (not perfect, but better than the current setup).
Did I forget anything?
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: NIC Teaming & Veeam v6
I would personally have placed a virtual Veeam server in the target site, and let the physical server act only as a proxy, placed directly in the source site.
But your design sounds correct; we are now waiting for the first runs to see the results.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
Let me explain what dark fiber is, in case you are not familiar with it... Although the disaster site is 5 km away, I get the exact same speed and latency as if the two sites were in the same room... This was one of the reasons why I didn't want to confuse people who are not familiar with dark fiber. We have our own direct fiber from site to site, without any interruption, routing or switching.
We are using a colour splitter, or CWDM, to split the fiber cable into 8 different channels. This is done with passive hardware, does not affect anything, and the separate channels are just as good as a normal fiber link. The only difference is that you have to use a different SFP with the right wavelength.
You could run 8 x 10 Gb/s through this cable for a total of 80 Gb/s, or you could buy the version with 32 channels for a total of 32 x 10 Gb/s = 320 Gb/s.
So for the purposes of this setup, let's just say that all the equipment is placed in the same rack.
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: NIC Teaming & Veeam v6
I know what dark fiber is; as I said, it was just my personal idea, and your design is OK.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
OK, but what do you think I could gain from your idea? I'm always open to new ideas...
If I'm not mistaken, then with your setup I would get the data from Fibre Channel as before, just at a different site. The Veeam server would be at the datacenter, and I would have to use the LAN network between the two sites, which is 2 Gb/s now and 4 Gb/s soon. This would mean that I tax the network between the two sites instead of the unused SAN link. Also, in the future I could simply replace the old SAN switch at the disaster site and, presto, 8 Gb/s without doing anything else...
Then if the datacenter burned down or something else happened, my main Veeam server would also be lost...
There might be something I have overlooked here, so please correct me if I'm wrong...
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: NIC Teaming & Veeam v6
No, I said to leave the Veeam server in the DR site, but move the physical Veeam PROXY to the source site.
Why? Because it can then pick up data from the SAN and dedupe/compress it "before" sending it over the wire, while in your scenario data optimization happens "after" the data has crossed the dark fiber. Even if you have plenty of bandwidth, this is an optimization.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Expert
- Posts: 110
- Liked: 14 times
- Joined: Nov 01, 2011 1:44 pm
- Full Name: Lars Skjønberg
- Contact:
Re: NIC Teaming & Veeam v6
Ahhh, missed that one... Yes, I see what you are saying... But still, then the data would travel over the LAN, which is only 2 Gb/s today, instead of the 4 Gb/s FC, and on the DR site the Veeam server has 4 Gb/s of LAN to the target ESXi, and maybe soon 10 Gb/s.
And does Veeam dedupe and compress when replicating? And isn't there a break-even point somewhere where deduping and compressing would take longer than just transferring the data, because the link is so fast? Then I would argue that this would be the opposite of an optimization...
But before I try, I won't know where that break-even point is... I will soon know, and then you will all know...