-
- Influencer
- Posts: 14
- Liked: never
- Joined: Jan 25, 2014 4:46 am
- Contact:
Veeam and NIC Teams
So, I have a new question now: take a Veeam backup to another server with a Veeam agent - will the process saturate LAG (NIC team/LACP) connections?
Due to the way LAGs work, a single connection can only use up to one NIC in a team. It takes multiple connections from server to client to use all NICs in a team and realize the full potential of the LAG.
Data:
http://www.hyper-v.nu/archives/marcve/2 ... rformance/
(This article is about implementing LACP in Hyper-V, but it was the nicest article I could quickly find to explain how LACP works, whether you use Hyper-V or not.)
http://blog.open-e.com/bonding-versus-mpio-explained/
(Why you should not use LACP for iSCSI.)
Example:
4-NIC team, each NIC 1 Gbps.
1x file copy: max speed is 1 Gbps.
4x file copies: all four copy at near 1 Gbps each.
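To reproduce that kind of multi-stream test, a rough PowerShell sketch (the file paths and target machine names are hypothetical) is to start four copies to four different machines in parallel and then list the established SMB connections - it is the distinct source/destination pairs that give the LAG hash something to distribute:
Code:
# Hypothetical files and four different target machines - adjust to your environment
$copies = @(
    @{ Src = "D:\test1.bin"; Dst = "\\client1\share" },
    @{ Src = "D:\test2.bin"; Dst = "\\client2\share" },
    @{ Src = "D:\test3.bin"; Dst = "\\client3\share" },
    @{ Src = "D:\test4.bin"; Dst = "\\client4\share" }
)

# One background job per copy so the four streams run concurrently
$jobs = foreach ($c in $copies) {
    Start-Job { param($s, $d) Copy-Item -Path $s -Destination $d } -ArgumentList $c.Src, $c.Dst
}

# While the copies run, list the SMB connections and their remote addresses
Get-NetTCPConnection -State Established -RemotePort 445 |
    Select-Object LocalAddress, LocalPort, RemoteAddress

$jobs | Wait-Job | Remove-Job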
So for Veeam to use the full potential of a LAG, it would have to form multiple simultaneous connections for network traffic to the other agent. If this is not supported, then a single large NIC would be advised.
This is coming from my last thread with Gostev, so I am looking at re-purposing an old server.
For example in one environment I would have:
[Hyper-V Server] ---10 Gbps--- [SWITCH] ===LACP LAG=== [Dell 2950]
The Dell 2950 is a re-purposed server for backup storage, running its own agent. I suppose at this point I could also investigate the benefit of making it a Veeam proxy as well. It would have a LAG of two 1 Gbps NICs.
So the question was:
Would Veeam form multiple connections and use the full LAG?
Or is it required that both source and destination have a native large-bandwidth NIC?
-
- Veteran
- Posts: 391
- Liked: 39 times
- Joined: Jun 08, 2010 2:01 pm
- Full Name: Joerg Riether
- Contact:
Re: Veeam and NIC Teams
Hi,
there is no simple answer to this question. You have to understand how LAGs work in detail.
With LAGs it's all about hash algorithms - it's important that the switches are able to tell network stream A to use this link and network stream B to use that link within a LAG. To do this, it is essential that the switch can separate logical sessions by, let's say, source/destination IP/MAC or specially crafted combinations of these. So, unfortunately, when you just look at the definition http://en.wikipedia.org/wiki/Link_aggregation you won't learn that much about LAG selection algorithms. Advanced switches and operating systems like ESXi 5.5 can use very advanced techniques, like a combination of IPs, ports and VLANs of source and destination COMBINED - all with the goal of achieving a very good distribution, with as many control parameters as possible to differentiate between 1:1 network connections.
Now, you have to know that there are nevertheless some really cute technical possibilities to achieve a good LAG distribution. But please always remember: a one-to-one connection is a one-to-one connection is a one-to-one connection. Or simplified: LAGs can't do magic. If they see a one-to-one connection coming from exactly the same source on the same port to the same target, LAGs can't redistribute it to other channels. I like this Austin Powers slide very much, it explains it pretty well: http://wahlnetwork.com/2014/01/13/vsphe ... -bandaids/
So you can't just look at product X or Y; you have to look at the whole ecosystem - the operating system, the switches, the network stack, even the mechanisms used INSIDE one OS. It can make a huge difference to compare SMBv3 with SMBv2 communication, especially when the goal is to have multiple streams that can be logically separated. And there is no 100% rule telling you what is and isn't possible. There are switch vendors which implement magic-like LAG algorithms reaching far beyond the ordinary, others which only implement the very basics, and again others which do very special vendor-specific things. Then again, all of this is only a success when the source and the destination also have these capabilities. As I said, there are some really cool things from some vendors where they try to avoid specific standard limitations, like Microsoft with their SMBv3 approach - and we will certainly see more of these approaches in the future from other vendors.
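To make the hash idea concrete, here is a deliberately simplified PowerShell sketch of how a switch might map a flow to one member link - real implementations are vendor-specific, so this only illustrates the principle, not any particular switch:
Code:
# Simplified illustration: hash the flow identifiers and take the result modulo
# the number of links. The same src/dst/port combination always lands on the
# same link; only different flows can end up on different links.
function Get-LagLink {
    param([string]$SrcIP, [string]$DstIP, [int]$SrcPort, [int]$DstPort, [int]$LinkCount = 4)
    $key  = "$SrcIP|$DstIP|$SrcPort|$DstPort"
    $hash = 0
    foreach ($c in $key.ToCharArray()) { $hash = ($hash * 31 + [int]$c) -band 0x7FFFFFFF }
    return $hash % $LinkCount
}

# One flow between the same hosts and ports -> always the same link:
Get-LagLink -SrcIP 10.0.0.10 -DstIP 10.0.0.20 -SrcPort 50000 -DstPort 2500
# Several flows that differ only in source port -> can spread across the links:
50000..50005 | ForEach-Object { Get-LagLink 10.0.0.10 10.0.0.20 $_ 2500 }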
Best regards,
Joerg
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Veeam and NIC Teams
Bryan,
since Veeam runs on a Windows OS, I would say first of all it comes down to what Windows can do. Have a look first at the native teaming in Windows 2012, or at the network card drivers on Windows 2008 and earlier.
In addition to Joerg's post: Veeam can leverage a LAG only if it creates at least two streams, but on basic switches the hash is based on MAC address or IP address, and in both cases you only have one of each on your Veeam server if you aggregate NICs.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Influencer
- Posts: 14
- Liked: never
- Joined: Jan 25, 2014 4:46 am
- Contact:
Re: Veeam and NIC Teams
Thanks for the replies.
Without getting too technical, and correct me if I'm wrong...
The only way to use the full potential of a LAG is to use multiple streams (concurrent connections), and then trust that the LAG algorithm will distribute the streams across the links for the best possible use of the LAG. I understand this has to do with MACs and math and so on. Take the Hyper-V article, for example: in his case he was testing multiple streams between two NIC teams, and the LAG was able to load-balance the streams.
But here is a more basic angle: if Veeam only uses one stream from agent to repository agent, then it would always be impossible for the LAG to be fully used. If Veeam used multiple streams, it would at least be possible - perhaps not guaranteed, depending on the switches and the NIC teams and so on, as you say - but at least possible. What is the point of a load-balancing LAG that cannot distribute streams?
Details aside for a moment, this is why I bring this up:
It was suggested I not use an iSCSI NAS, and instead use a server with an agent. I see a trade here. With iSCSI I can leverage MPIO for faster backups. With a server and an agent (without a supported LAG) I get 1 Gbps for slower backups, but an agent for faster transforms. If there is no LAG, the only way to have the best of both worlds is jumping from a 1 Gbps NIC to 10 Gbps, plus the accompanying network hardware.
If Veeam used multiple streams, I could see it beating iSCSI because of also having a local agent. But without a LAG, iSCSI with MPIO - just two or three MPIO paths - would beat 1 Gbps to an agent; granted, with slower transforms. But why would I repurpose a server with 10 TB of storage or so just to have fast transforms?
Really, I'm trying to find a way to leverage multiple NICs on a Veeam backup server (running Server 2012 R2) so the backups run as fast as iSCSI with MPIO. That is all I'm trying to do.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Veeam and NIC Teams
Veeam is indeed able to create multiple streams between proxy and repository, from the user guide:
"Normally, within one backup session Veeam Backup & Replication opens five parallel TCP/IP connections to transfer data from source to target"
the problem is, this happens inside a single IP-to-IP connection, so I'm not sure it helps with a LAG. What can help you more, if you create a LAG for the repository, is to have at least two proxies with different IP addresses and to enable parallel processing. That way, at any time you will have at least two completely separate streams coming from the two proxies, and the LAG "should" be able to balance them.
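If you want to see those streams for yourself while a job runs, a quick check on the repository server (Windows 2012 or later; this sketch assumes the default data mover port range starting at 2500 - on older systems plain netstat -an works too) could be:
Code:
# Run on the repository server while a backup job is active.
# The 2500-5000 port range is an assumption based on the default data mover ports.
Get-NetTCPConnection -State Established |
    Where-Object { $_.LocalPort -ge 2500 -and $_.LocalPort -le 5000 } |
    Group-Object RemoteAddress |
    Select-Object Name, Count    # remote proxy IPs and how many streams each opened
With two proxies and parallel processing enabled you should see at least two different remote addresses here, which is exactly what gives the LAG hash a chance to split the load.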
Luca.
"Normally, within one backup session Veeam Backup & Replication opens five parallel TCP/IP connections to transfer data from source to target"
the problem is, this happens inside a single ip-to-ip connection, so I'm not sure it has something to do with LAG. What can help you better, if you create a LAG for the repository, is to have at least two proxies with different ip addresses, and enable parallel processing. In this way, at anytime you will have at least two complete separated streams coming from the two proxies, and LAG "should" be able to balance them.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Influencer
- Posts: 14
- Liked: never
- Joined: Jan 25, 2014 4:46 am
- Contact:
Re: Veeam and NIC Teams
dellock6,
Thanks for finding that gem, and for the idea about the proxy. As far as I know, a LAG handles things per live connection, not per IP. The underlying hardware decides how to distribute connections across the links. Take the Hyper-V article for example, where this is demonstrated - in fact, in Hyper-V, traffic from multiple VMs can be load-balanced across several links in a LAG at any given time.
Nevertheless, the information about "parallel connections" is enough for me to go ahead and at least take the time to TRY Veeam agent to agent on a LAG and see what happens. The project is some time out, perhaps two months, but I will try to report what I find.
thanks dellock6,
Bryan
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Veeam and NIC Teams
If the hash is only based on IP, then a single proxy to repository probably won't see any benefit; however, most LAG configurations allow TCP port information to also be included in the hash algorithm that decides which link to use. Since the agents by default create five connections, there will be five source ports used on the agent side, which should be enough to provide some load balancing in the LAG, assuming src/dst port information is used for the hash. Note that this must be configured on the systems and on the switch, or the behavior can be unpredictable. Here's what a single job running between proxy and repository typically looks like from a TCP connection perspective:
Code:
TCP 192.168.60.9:64415 192.168.60.29:2501 ESTABLISHED
TCP 192.168.60.9:64416 192.168.60.29:2500 ESTABLISHED
TCP 192.168.60.9:64417 192.168.60.29:2500 ESTABLISHED
TCP 192.168.60.9:64418 192.168.60.29:2500 ESTABLISHED
TCP 192.168.60.9:64419 192.168.60.29:2500 ESTABLISHED
TCP 192.168.60.9:64420 192.168.60.29:2500 ESTABLISHED
So, as you can see, the source ports are all different for each connection, so the traffic can be spread if the hash algorithm uses the src/dst TCP port as part of its LAG hashing. Most switches at least use the source port info, which should still be OK, but might cause some unexpected balancing due to the direction in which the connections are created.
-
- Influencer
- Posts: 14
- Liked: never
- Joined: Jan 25, 2014 4:46 am
- Contact:
Re: Veeam and NIC Teams
This is precisely what I was looking for!
It seems at the outset that, assuming things are OK at the network level, Veeam's use of multiple connections might well saturate a LAG. This makes me feel even more hopeful about the future build.
Also, I am by no means a LAG expert, but I think this is consistent. Imagine it was a terminal server on a LAG, with 50 users connected to Google on port 80 or watching videos on YouTube... I'm sure if it works there, it would work here as well.
Thanks Tom Sightler for the netstat.
Bryan
-
- Veteran
- Posts: 391
- Liked: 39 times
- Joined: Jun 08, 2010 2:01 pm
- Full Name: Joerg Riether
- Contact:
Re: Veeam and NIC Teams
Yep. But please remember what Tom told you: all devices must support a LAG algorithm where the source port can be included in the overall hashing combination. If you can't ensure this 100%, you will find yourself in a situation where any LAG-specific behavior will be unpredictable. A good way to start would be to monitor the separate LAG channels via the switch in a controlled test lab.
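If you cannot easily pull per-port counters off the switch, watching the team from the Windows side during a test run gives a rough picture too; a sketch for the Windows 2012 R2 built-in teaming (counter instance names will differ per system) could be:
Code:
# Show the team, its members and the configured teaming/hashing mode
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members

# Sample per-adapter throughput every 2 seconds while a test job is running;
# compare the physical team members to see whether more than one link is used
Get-Counter -Counter "\Network Interface(*)\Bytes Total/sec" -SampleInterval 2 -MaxSamples 15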
Best regards,
Joerg
-
- Influencer
- Posts: 14
- Liked: never
- Joined: Jan 25, 2014 4:46 am
- Contact:
Re: Veeam and NIC Teams
What am I looking for beyond the appropriate NIC teaming (set up in LACP mode) on the server itself, and the LAG being set up on the switches (in this case Cisco)?
-
- Veteran
- Posts: 391
- Liked: 39 times
- Joined: Jun 08, 2010 2:01 pm
- Full Name: Joerg Riether
- Contact:
Re: Veeam and NIC Teams
You are asking about the Hyper-V side, correct? Again, there is no simple answer to that, especially when it comes to the hypervisor side, and very especially if the hypervisor is Hyper-V. It starts with switch-independent teams (where you will not use LAGs) versus switch-dependent teams, which also have to be configured 1:1 on the switch side. This article (please also read the comments) covers some of these things: http://www.aidanfinn.com/?p=12572
But this is only the tip of the iceberg. Digging deeper into Hyper-V, there are some scenarios (for example when using networking in a Hyper-V failover cluster) where other rules for certain network traffic apply, which you need to make yourself familiar with.
To dig deeper you could start with these two articles - but you need to know much, much more to fully understand all the networking-related specifics when it comes to aggregation of ANY kind in combination with Hyper-V systems.
http://blogs.technet.com/b/privatecloud ... loads.aspx
http://blogs.technet.com/b/josebda/arch ... b-3-0.aspx
Remember that little Austin Powers picture I mentioned in my first post?
Don't get me wrong - I love LAGs and MLAGs - ON THE BACKBONE! To be clear about that: switch talking to switch. What I don't like, because I found it brings much more complexity and problems than it can solve, is static switch-dependent trunks on the host side. I totally agree with this guy here: http://wahlnetwork.com/2013/03/05/stop- ... ere-hosts/
This is not only valid for vSphere but for any hypervisor solution I am aware of. But please know: this is only my very personal opinion.
Best regards,
Joerg
-
- Veteran
- Posts: 391
- Liked: 39 times
- Joined: Jun 08, 2010 2:01 pm
- Full Name: Joerg Riether
- Contact:
Re: Veeam and NIC Teams
And as this is very often misunderstood: some people think LAGs and LACP are two completely different things - this is wrong. Very, very simplified, and only my opinion: LACP is just something designed to make life easier for beginners using LAGs. So you can (if the device supports it) enable LACP on existing LAGs. People who know 100% what they are doing don't need LACP (but then again, only my very personal opinion). Please read this if you'd like to dig deeper: http://wahlnetwork.com/2012/05/09/demys ... r-vsphere/
-
- Expert
- Posts: 115
- Liked: 8 times
- Joined: Jun 22, 2016 9:47 pm
- Full Name: Daniel Kaiser
- Contact:
Re: Veeam and NIC Teams
My backups now run at a full 1 Gbps - 100-120 MB/s continuous.
If I make a team with two 1 Gbps cards, can this give throughput of more than 1 Gbps?
-
- Veteran
- Posts: 500
- Liked: 109 times
- Joined: Oct 27, 2012 1:22 am
- Full Name: Clint Wyckoff
- Location: Technical Evangelist
- Contact:
Re: Veeam and NIC Teams
This is possible as long as you have the team set up with the correct teaming configuration. If you're running Server 2012 R2, you'll want to use the built-in NIC Teaming, configured through Server Manager, to create and manage the team.
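For reference, the same team can also be created from PowerShell with the built-in LBFO cmdlets; a sketch for an LACP team that includes transport ports in the hash (the NIC and team names are placeholders) could be:
Code:
# Placeholder NIC and team names - adjust to your hardware
New-NetLbfoTeam -Name "BackupTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts -Confirm:$false

# Verify the team and the reported link speed of the team interface
Get-NetLbfoTeam -Name "BackupTeam"
Get-NetAdapter -Name "BackupTeam" | Select-Object Name, LinkSpeed
The LACP mode obviously needs the matching port-channel configured on the switch; for a plain switch-independent team, which needs no switch configuration at all, you would use -TeamingMode SwitchIndependent instead.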
-
- Expert
- Posts: 115
- Liked: 8 times
- Joined: Jun 22, 2016 9:47 pm
- Full Name: Daniel Kaiser
- Contact:
Re: Veeam and NIC Teams
Yes, this works now. We made a team from two cheap NICs - teaming mode switch independent, load balancing mode address hash. Windows shows it as 2 Gbps, and everything works.
-
- Enthusiast
- Posts: 48
- Liked: 7 times
- Joined: Jun 18, 2013 8:12 am
- Full Name: Nils Petersen
- Contact:
[MERGED] Using multiple NICs for proxy and repository
Sorry if this has been asked before, but I wasn't able to find an answer to my problem.
We've recently upgraded our connection between two office locations from single to dual gigabit, and now I'm trying to make use of this for backup.
Scenario:
Location 1:
- 2 ESXi hosts (5.1)
- iSCSI storage
- virtual proxy (Win2012R2) with direct SAN access through ESXi storage NICs (wasn't too happy with hotadd in Veeam 8)
- Veeam backup server (Win2008R2, VBR 9.0u2), local backup repository (DAS)
What I did:
- added a 2nd NIC to the backup server, added a 2nd IP address (B) to the already existing IP (A) in the server subnet
- added a 2nd vNIC to the proxy, bound both vNICs to dedicated host NICs through extra port groups with a single NIC, added a 2nd IP (D) to the already existing IP (C) in the server subnet
- the new IP addresses were selected to vary traffic distribution over the trunk: B=A+1, D=C+3
- verified that traffic distributes over both LACP links and both ESXi NICs when using different connection combinations A-B, A-C, B-C, and B-D
- the setup uses DNS names for the repository and the proxy, so I added 2nd IP addresses to the server's and the proxy's DNS records
- verified that both IP addresses are resolved in turn through DNS (round robin with a Windows 2012R2 DNS server)
Experimentally, I've tried adding static routes for A-C, B-D / C-A, D-B on each side. This had no effect.
How do I get Veeam proxy/repository to rotate through the source/destination addresses?
-
- Enthusiast
- Posts: 48
- Liked: 7 times
- Joined: Jun 18, 2013 8:12 am
- Full Name: Nils Petersen
- Contact:
Re: Using multiple NICs for proxy and repository
Apparently, there is no easy solution...
In case you're looking at a similar problem, this is the - somewhat awkward - solution:
- deactivated the 1st proxy with DNS name/IP address in the server subnet, which I can't route as I need to
- installed a 2nd virtual proxy, configured iSCSI etc.
- added IP addresses and extra DNS names for both proxies in two additional IP subnets that I run over the server subnet anyway - this seems to be necessary to get the required routing
- added extra IP addresses in these subnets to the backup/repository server
- added the two additional subnets under Menu -> Network Traffic -> Networks as preferred networks (completely missed that one - thanks, Aleksej!)
- set both Windows servers underlying the proxies to 'Run server on this side' (Backup Infrastructure -> Properties -> Credentials -> Ports) - otherwise the proxies connect to the repository's first IP address/first NIC only
- set the backup job to use these two proxies
PS: a small correction to the first post: with our HP 2910 switches I needed to use the IP addresses B=A+1 and D=C+2.
-
- Enthusiast
- Posts: 48
- Liked: 7 times
- Joined: Jun 18, 2013 8:12 am
- Full Name: Nils Petersen
- Contact:
Re: Using multiple NICs for proxy and repository
Small update: Veeam support has indicated that you can also use a single proxy instance with multiple NICs/IP addresses if you separate the routes by using multiple names, each resolving to one of the IP addresses, or by using the IP addresses directly. Each name/IP has to be added as a separate proxy/server. I haven't tried it yet, but I'm positive that will work as well.
-
- Enthusiast
- Posts: 48
- Liked: 7 times
- Joined: Jun 18, 2013 8:12 am
- Full Name: Nils Petersen
- Contact:
Re: Veeam and NIC Teams
Another - late - update:
With two proxies, each in its own (logical) subnet, both links can be utilized 100%. We were able to double our backup speed and halve the backup window - perfect load balancing.
If you try to build this yourself you need:
* one proxy for each one of the network links - alternatively, one proxy with several vNICs (bound to separate host NICs)
* put each proxy (or vNIC) into its own "special" IP subnet and add DNS names for these - you can also use a DNS name with a normal IP address, just don't use it when adding the proxy
* add an IP out of the special subnets to your repository server's NICs as required to reach your proxy of choice with this NIC
* check whether traffic within each of the subnets runs through the NICs, links and vNICs as required, using e.g. iperf (see the sketch after this list) - you're running this subnet on top of your normal subnet, which might require configuring your network security
* add each proxy to your Veeam infrastructure with the DNS name using the "special" IPs - this forces Veeam to use the IP subnet designed to route the traffic through your trunk links
* configure the proxy to "run the server on this side" (Managed Servers -> Microsoft Windows -> proxy name -> Properties -> Credentials -> Ports) - otherwise, the proxies connect to your repository using the standard subnet
* configure your backup jobs to use these proxies
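For the iperf check mentioned above, a minimal sketch (the IP addresses are placeholders for the "special" subnet addresses) is to bind both ends explicitly so you know exactly which NIC pair you are testing, then repeat the run for the second subnet and compare:
Code:
# On the repository server: listen on the special-subnet address of the NIC under test
iperf -s -B 10.10.1.10

# On the proxy: connect to that address, binding to the proxy's own special-subnet IP
iperf -c 10.10.1.10 -B 10.10.1.20 -t 30

# Repeat with the second subnet (e.g. 10.10.2.x) and check that each run uses the expected link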