Discussions specific to the VMware vSphere hypervisor
tsightler
VP, Product Management
Posts: 5257
Liked: 2122 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: NIC Teaming & Veeam v6

Post by tsightler » Mar 27, 2012 12:56 am

So you only have a single Veeam server and a single VMware ESXi 5 server? Are you doing direct SAN with iSCSI or are you just using Network mode?

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 27, 2012 8:37 am

I can see that there is some confusion here, so I will now also include the source and target SAN:

Source SAN FC -> Physical Veeam Backup Server -> 4 x 1 Gb/s NICs -> Cisco 3560E -> 4 x 1 Gb/s NICs -> VMware ESXi 5 -> Target SAN FC

As I have written from the start, I use Fibre Channel and NOT iSCSI; I don't know why that came up.

I have highlighted the part that is important, the part we are talking about. That is: how do I bond the data network from Veeam to the switch, and from the switch to VMware ESXi 5, so that I can utilize more of that link?

Whether I use iSCSI, what SAN I use, and whether I have one, two, or many ESXi servers is not relevant here.

dellock6
Veeam Software
Posts: 5599
Liked: 1563 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NIC Teaming & Veeam v6

Post by dellock6 » Mar 27, 2012 9:38 am

OK, but as Tom stated, and as the VMware KB article he linked says:

"Note: One IP to one IP connections over multiple NIC is not supported. (Host A one connection session to Host B uses only one NIC)"

This is not a Veeam limitation; it comes from the EtherChannel protocol.

So, Veeam has 1 IP, right?
The target ESXi 5 host has 1 IP, right?

If that's the case, you are in a situation where EtherChannel cannot help you.
You can speed up replication by creating multiple replica jobs, running at the same time, from the Veeam server to different ESXi servers; this would be one-to-many and is supported.
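
To illustrate why a one-to-one session sticks to a single link, here is a minimal Python sketch of source/destination IP hashing, the same idea behind EtherChannel and ESXi's "route based on IP hash" (the XOR-and-modulo below mimics the behaviour; it is not Cisco's exact algorithm):

```python
import ipaddress

def select_link(src_ip: str, dst_ip: str, num_links: int = 4) -> int:
    """Pick an EtherChannel member link for a src/dst IP pair.
    Illustrative only: real switches hash low-order address bits,
    but the outcome is the same -- one pair, one link."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_links

# One Veeam IP to one ESXi IP: the hash never changes, so all
# traffic for this session rides a single 1 Gb/s member link.
print(select_link("192.168.1.10", "192.168.1.50"))  # same index every time

# One-to-many is different: each destination can hash to another link.
for dst in ("192.168.1.50", "192.168.1.51", "192.168.1.52", "192.168.1.53"):
    print(dst, "->", select_link("192.168.1.10", dst))
```

Because the hash input never changes for a single pair, no amount of bonding helps a single session; only multiple sessions to multiple addresses spread across the member links.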
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
http://www.virtualtothecore.com/en/
vExpert 2011-2012-2013-2014-2015-2016-2017-2018
Veeam VMCE #1

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 27, 2012 9:43 am

Yep, I understand that. But he also said something about DNS round robin, and I guess I could Google this, but if anyone has any information about how I can set this up with Veeam it would be greatly appreciated.

I also understand that round robin is a poor man's load balancing, but since I have about 80 replica jobs running, chance alone means I could use more of the links.

tfloor
Expert
Posts: 270
Liked: 14 times
Joined: Jan 03, 2012 2:02 pm
Full Name: Tristan Floor
Contact:

Re: NIC Teaming & Veeam v6

Post by tfloor » Mar 27, 2012 10:25 am

lars@norstat.no wrote: I could create a picture tomorrow maybe, but it's so simple that maybe this will do... :

Physical Veeam Backup Server -> 4 x 1 Gb/s NICs -> Cisco 3560E -> 4 x 1 Gb/s NICs -> VMware ESXi 5

I have not included what's before the Veeam server, as this is irrelevant.
Same here, but with an HP switch instead of a Cisco switch.

dellock6
Veeam Software
Posts: 5599
Liked: 1563 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NIC Teaming & Veeam v6

Post by dellock6 » Mar 27, 2012 10:39 am

If you look at the VMware KB article, it is relevant for both Cisco and HP...

About RR, I have never done it, and I too am waiting to hear from Tom how to accomplish it. In high-load scenarios, we have in the past preferred to go directly to 10 Gbit connections. Easier, and not that much more expensive than all the network work we are talking about here.
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
http://www.virtualtothecore.com/en/
vExpert 2011-2012-2013-2014-2015-2016-2017-2018
Veeam VMCE #1

tsightler
VP, Product Management
Posts: 5257
Liked: 2122 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: NIC Teaming & Veeam v6

Post by tsightler » Mar 27, 2012 1:14 pm

lars@norstat.no wrote: Source SAN FC -> Physical Veeam Backup Server -> 4 x 1 Gb/s NICs -> Cisco 3560E -> 4 x 1 Gb/s NICs -> VMware ESXi 5 -> Target SAN FC
So is this a replication scenario between just two ESXi servers, or between multiple ESXi servers and a single ESXi 5 host? I'll assume you are retrieving the data from the source SAN via direct FC, and that you're looking to get maximum throughput on the target.

One option would be to add multiple management IPs to the ESXi host, and register all of those IP addresses in DNS with the same hostname. That way each connection to the host via network mode would use a different IP address. As long as you had multiple jobs, that should spread the load, at least somewhat.

That being said, I suspect you will hit the throughput limits of the ESXi management interface before you get close to 4 Gb of throughput. The ESXi management interface was never really designed to push that much data. If I were attempting to get maximum performance from a similar setup, I would suggest configuring at least one, and probably a couple of, virtual proxies on the target ESXi 5 host. This allows the data between the source and target to be compressed (typically around 2:1 or more), and, if you have at least two proxies, the traffic should be roughly balanced, assuming the IPs are assigned so that different links are selected by the hash.

If you want to get by with a single proxy on the target, then I'd configure a few extra IP addresses on the source Veeam server and add them to DNS using the same hostname; that way connections back from the target proxy will be made to different IP addresses on the Veeam server, which should balance the load.

For example, assume the following layout:

Veeam server name: VSrv01 - IP: 192.168.1.10
Veeam proxy name: VPrxy01 - IP: 192.168.1.20

Normally, connections for replication would be made from the target proxy to the source (192.168.1.20 -> 192.168.1.10). However, if you added additional IP addresses to VSrv01 (192.168.1.11, 192.168.1.12, 192.168.1.13) and had those registered in DNS with the same hostname, then each connection would randomly connect back to one of the 4 different IP addresses.
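
As a quick sanity check, you can see what the resolver hands back for such a name from Python (the hostname and addresses here are the hypothetical ones from the layout above):

```python
import socket

# gethostbyname_ex returns every A record registered for the name,
# while gethostbyname returns only whichever one came back first.
name, aliases, addresses = socket.gethostbyname_ex("VSrv01")
print(addresses)  # e.g. ['192.168.1.10', '192.168.1.11', '192.168.1.12', '192.168.1.13']

# With round-robin DNS the first address rotates between lookups,
# so successive connections spread across the registered IPs.
for _ in range(4):
    print(socket.gethostbyname("VSrv01"))
```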

I'm also curious: it would require a very large change rate before the link speed really became the limiting factor. Even a single Gb link with no compression can transfer 100 GB in less than 15 minutes. In most environments the daily change rate of even TBs of data wouldn't be this high; it's more typically in the "dozens" of GBs. Are you planning to replicate very often? If so, it seems very unlikely that a 1 Gb link would be your limiting factor. If you're replicating every 10 minutes, the typical amount of data is likely to be just a few GBs at most, likely taking < 1 minute even with a single 1 Gb link. There will be more time spent taking and removing the snapshot than there will be in transferring the data.
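
For reference, the back-of-the-envelope math behind those numbers (assuming usable throughput at the raw line rate; real-world TCP overhead shaves a little off):

```python
# Wall-clock time to push data over a single 1 Gb/s link.
link_gbps = 1                      # raw line rate, in gigabits per second
data_gb = 100                      # payload, in gigabytes
seconds = data_gb * 8 / link_gbps  # 8 bits per byte
print(f"{data_gb} GB over {link_gbps} Gb/s: {seconds / 60:.1f} minutes")
# -> 100 GB over 1 Gb/s: 13.3 minutes (under 15, as stated)

# A few GB every 10 minutes is well under a minute of transfer time.
print(f"3 GB: {3 * 8 / link_gbps:.0f} seconds")  # -> 24 seconds
```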

tsightler
VP, Product Management
Posts: 5257
Liked: 2122 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: NIC Teaming & Veeam v6

Post by tsightler » Mar 27, 2012 1:27 pm

Setting up RR DNS really doesn't have much to do with Veeam itself; it's just a method that can be used to direct incoming connections to multiple IP addresses.

It's a little confusing above because I mention two different scenarios. In one, there's a single Veeam server that is talking to a single ESXi host. In that scenario the Veeam server is talking directly to the ESXi host, so you must load balance the connections "into" the ESXi management interfaces. Basically, you configure multiple management IP addresses for ESXi and add them all to DNS with the same hostname. That's it. Each time the hostname is resolved, a different IP address is returned from DNS. It's very much "random" allocation, but it should spread the load somewhat.

In the scenario where there are two Veeam proxies in use, a source and a target proxy, you have to load balance the connections between the Veeam agents. Veeam data connections are always made from the target proxy back to the source proxy, so you have to influence the connections back to the source proxy. To do this you simply add multiple IP addresses to the source proxy and make sure they are all registered in DNS. On Windows this typically requires no special configuration other than adding the addresses and making sure they register in DNS with the same hostname.
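
If you want to verify that the round robin is actually spreading connections, a quick tally of repeated lookups will show it (the hostname below is a placeholder, and note that a caching resolver in between can pin you to one address):

```python
import socket
from collections import Counter

# Simulate 100 connection setups and count which address each would use.
hits = Counter(socket.gethostbyname("veeam-source") for _ in range(100))
for ip, count in hits.most_common():
    print(ip, count)
# With working round-robin DNS the counts come out roughly even
# across all the addresses registered for the name.
```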

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 29, 2012 6:13 pm

Thanks, this looks good. FYI, I'm trying to replicate about 80 servers continually, so there is quite a lot of traffic...

So then to recap i will try to set up the following:

The source consists of the following:
2 x IBM 3850 X5 with a Storwize V7000 SAN connected to an 8 Gb fabric.
The link from the datacenter to the disaster site is 5 km of dark fiber with CWDM.
One channel is used for Fibre Channel (4 Gb/s) to a 4 Gb/s fibre switch at the disaster site.

Disaster site:
Physical Veeam server: IBM 3650 with 12 GB RAM (soon 24 GB), 8 cores with MT (16 logical cores), a Fibre Channel card, and 4 x 1 Gb/s NICs.
The Veeam server is connected to the target-site fibre switch and can see both the source and target SANs directly over Fibre Channel.
The target ESXi host is an IBM x3850 M2 (soon 2 x IBM 3850) with a Fibre Channel card and 4 x 1 Gb/s NICs. The ESXi host can see the source and target SANs directly.
The target SAN is an IBM DS4700 with Fibre Channel disks and SATA disks.

Config:
The Veeam server should have 4 separate network cards, each with its own IP, entered in DNS with the same alias.
The target ESXi server should have its 4 cards bundled into one EtherChannel (route based on IP hash).
Install 4 x Windows Server 2008 R2 x64 servers on the target ESXi host to use as proxies (in the future, 2 Windows servers on each target ESXi).
The physical Veeam server is set up as the source proxy, and the proxies on the target ESXi are set up as target proxies.

Effect:
The physical Veeam server collects the source data directly from the SAN via Fibre Channel.
The physical Veeam server compresses the data to send over the LAN to the proxies (or is this not worth the CPU? We are talking about replicas, NOT backup).
The proxies connect to the physical Veeam server to collect the data and store it on the target SAN.
Since the physical Veeam server has 4 different IPs, the proxies will connect randomly to different NICs (not perfect, but better than the current setup).

Did I forget anything?

dellock6
Veeam Software
Posts: 5599
Liked: 1563 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NIC Teaming & Veeam v6

Post by dellock6 » Mar 29, 2012 9:55 pm

I would personally have placed a virtual Veeam server at the target site and let the physical server act only as a proxy, placed directly at the source site.
But your design sounds correct; now we're waiting for the first runs to see the results.

Luca.
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
http://www.virtualtothecore.com/en/
vExpert 2011-2012-2013-2014-2015-2016-2017-2018
Veeam VMCE #1

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 29, 2012 10:01 pm

Let me explain what dark fiber is, in case you are not familiar with it... Although the disaster site is 5 km away, I get the exact same speed and latency as if the two were in the same room. This was one of the reasons I didn't want to confuse people who are not familiar with dark fiber. We have our own direct fiber from site to site, without any interruption, routing, or switching.

We are using a colour splitter, or CWDM, to split the fiber cable into 8 different channels. This is done with passive hardware, does not affect anything, and the separate channels are just as good as a normal fiber link. The only difference is that you have to use SFPs with the right wavelengths.

You could run 8 x 10 Gb/s through this cable for a total of 80 Gb/s, or you could buy the 32-channel version for a total of 32 x 10 Gb/s = 320 Gb/s :-)

So for the purposes of this setup, let's just say that all the equipment is in the same rack.

dellock6
Veeam Software
Posts: 5599
Liked: 1563 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NIC Teaming & Veeam v6

Post by dellock6 » Mar 29, 2012 10:04 pm

I know what dark fiber is; as I said, it was just my personal preference. Your design is OK :)
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
http://www.virtualtothecore.com/en/
vExpert 2011-2012-2013-2014-2015-2016-2017-2018
Veeam VMCE #1

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 29, 2012 10:13 pm

OK, but what would I gain from your idea? I'm always open to new ideas...

If I'm not mistaken, with your setup I would get the data from Fibre Channel as before, just at a different site. The Veeam server would be in the datacenter, and I would have to use the LAN between the two sites, which is 2 Gb/s now and 4 Gb/s soon. This would mean taxing the network between the two sites instead of the unused SAN link. Also, in the future I could simply replace the old SAN switch at the disaster site and, presto, 8 Gb/s without doing anything else...

Then if the datacenter burned down, or something else happened, my main Veeam server would also be lost...

There might be something I overlooked here, so please correct me if I'm wrong...

dellock6
Veeam Software
Posts: 5599
Liked: 1563 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NIC Teaming & Veeam v6

Post by dellock6 » Mar 29, 2012 10:26 pm

No, I said to leave the Veeam server at the DR site, but move the physical Veeam PROXY to the source site.
Why? Because it can then pick up the data from the SAN and dedupe/compress it "before" sending it over the wire, while in your scenario the data optimization happens "after" the data has crossed the dark fiber. Even if you have plenty of bandwidth, this is an optimization.
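
A tiny sketch of the effect, using zlib as a stand-in for Veeam's own compression (the data and the resulting ratio are illustrative; Tom's 2:1 figure is the typical real-world number):

```python
import zlib

# Stand-in for a reasonably compressible block of changed VM data.
block = b"some repetitive guest filesystem data " * 4096

compressed = zlib.compress(block, level=6)
print(f"raw on the wire:      {len(block):>8} bytes")
print(f"compressed at source: {len(compressed):>8} bytes")
print(f"ratio: {len(block) / len(compressed):.1f}:1")
# Compress at the source proxy and only the smaller payload crosses
# the inter-site link; compress at the target and the full-size data
# has already crossed it.
```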
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
http://www.virtualtothecore.com/en/
vExpert 2011-2012-2013-2014-2015-2016-2017-2018
Veeam VMCE #1

lars@norstat.no
Expert
Posts: 109
Liked: 14 times
Joined: Nov 01, 2011 1:44 pm
Full Name: Lars Skjønberg
Contact:

Re: NIC Teaming & Veeam v6

Post by lars@norstat.no » Mar 29, 2012 10:34 pm

Ahhh, missed that one... Yes, I see what you are saying. But still, the data would then travel over the LAN, which is only 2 Gb/s today, instead of over the 4 Gb/s FC; and at the DR site the Veeam server has a 4 Gb/s LAN to the target ESXi, maybe soon 10 Gb/s.

And does Veeam dedupe and compress when replicating? And isn't there a break-even point somewhere, where deduping and compressing takes longer than just transferring the data because the link is so fast? In that case I would argue this would be the opposite of an optimization...

But I won't know where the break-even point is until I try... I will soon know, and then you will all know... :-)
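
For what it's worth, a rough sketch of that break-even: compression only slows things down once the compressor is slower than the link. All the numbers below are illustrative guesses, not measurements:

```python
def transfer_seconds(data_gb, link_gbps, compress_gbps=None, ratio=1.0):
    """Estimate wall-clock seconds to ship data_gb over link_gbps.
    With inline compression the pipeline is limited by the slower of
    the compressor and the link (which only carries data_gb / ratio)."""
    if compress_gbps is None:
        return data_gb * 8 / link_gbps
    effective_gbps = min(compress_gbps, link_gbps * ratio)
    return data_gb * 8 / effective_gbps

data = 100  # GB per replication run, purely illustrative
for link in (2, 4, 10):  # Gb/s
    raw = transfer_seconds(data, link)
    comp = transfer_seconds(data, link, compress_gbps=8, ratio=2.0)
    print(f"{link} Gb/s link: raw {raw:.0f}s, 2:1 compressed {comp:.0f}s")
# On the slower links compression wins; once the link outruns the
# compressor (here at 10 Gb/s), sending raw is actually faster.
```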
