Host-based backup of VMware vSphere VMs.
serverboy
Novice
Posts: 3
Liked: never
Joined: Oct 26, 2021 12:42 pm
Full Name: Jay Bojackson
Contact:

Force traffic down both 10GbE NICs

Post by serverboy »

Hi

I am doing some testing with Veeam in my homelab to get a better understanding of how it all works. I have managed to get it all up and running and it works great, except for one thing which is bugging me: I can't seem to get the traffic to go through both of my 10GbE NICs.

My setup consists of 3 hosts in a datacenter. All 3 hosts are on ESXi 7.0.3. Each host has two 10GbE NICs which are part of a dvSwitch that handles VM traffic, VLANs and external access.

My 3 hosts, FreeNAS unit (used to present datastores to vSphere via iSCSI) and Synology are all connected to a MikroTik 16-port SFP+ switch. The FreeNAS unit and Synology are both fitted with dual-port SFP+ NICs and are connected with DAC cables.

All 3 hosts have six VMkernel ports - 2 for iSCSI, 2 for vMotion and 2 for management. Each is configured via traffic shaping to go to different NICs.
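
In case it's useful, this is roughly how I check which port group and uplinks each VMkernel uses (a PowerCLI sketch - the vCenter and dvSwitch names are just from my lab):

# PowerCLI sketch - list every VMkernel adapter per host, then show the
# uplink teaming (active/standby NICs) for each port group on the dvSwitch.
Connect-VIServer -Server vcenter.lab.local

foreach ($esx in Get-VMHost) {
    Get-VMHostNetworkAdapter -VMHost $esx -VMKernel |
        Select-Object VMHost, Name, IP, PortGroupName
}

Get-VDSwitch -Name "dvSwitch01" | Get-VDPortgroup | ForEach-Object {
    $_.Name
    $_ | Get-VDUplinkTeamingPolicy |
        Select-Object LoadBalancingPolicy, ActiveUplinkPort, StandbyUplinkPort
}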

The Synology is used as a repository and is mapped via two different subnets to my Veeam backup VM via iSCSI.

All 3 hosts are connected to vCenter.
The hosts and vCenter are on the same network.
All three hosts, including their two 10GbE NICs, are connected to a dvSwitch.

I have given the Veeam VM 4 NICs - two are called ISCSI-V40 and ISCSI-V41 for the MPIO iSCSI to my Synology, which works fine. Traffic is passed down both NICs during the backups.
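
For what it's worth, this is how I confirmed both iSCSI paths are actually up on the Veeam server (the target details below are placeholders for my two iSCSI subnets):

# Each NIC logs in through its own portal, one per subnet
Get-IscsiSession | Select-Object TargetNodeAddress, InitiatorPortalAddress, IsConnected

# mpclaim ships with the Windows MPIO feature; -s -d lists each MPIO disk
# and its path count (I see 2 paths per disk here)
mpclaim -s -d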

However, on the other two, traffic only pulls down one of them. They are both on the same subnet as my vCenter and ESXi hosts.

I have added the vCenter into Veeam via its DNS name.

How can I make it so that when a job kicks off it will pull traffic down both NICs?
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Force traffic down both 10GbE NICs

Post by HannesK »

Hello,
and welcome to the forums.

My understanding is that you are asking about network (NBD) mode (iSCSI multipathing seems to work). There is an existing thread on that with two statements that should help with the solution. If you add more proxies, the load should then be balanced across the management ports.
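
If you want to script it, the rough shape in Veeam PowerShell would be something like this (sketch only - the server name and credential record are just examples, adjust to your lab):

# Register the VM as a managed Windows server, then give it the
# VMware backup proxy role.
Import-Module Veeam.Backup.PowerShell
Add-VBRWinServer -Name "proxy01.lab.local" -Credentials (Get-VBRCredentials -Name "lab\administrator")
Add-VBRViProxy -Server (Get-VBRServer -Name "proxy01.lab.local") -MaxTasks 4

Repeat per host, and the job scheduler will spread VM tasks across the proxies.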

Let us know how it goes :-)

Best regards,
Hannes
Mehnock
Influencer
Posts: 20
Liked: 4 times
Joined: Oct 27, 2021 12:10 pm
Full Name: Christopher Navarro
Contact:

Re: Force traffic down both 10GbE NICs

Post by Mehnock »

It is not best practice to have two NICs on the same host on the same subnet - unless you are doing some type of bonding of interfaces (supported by the host and the switch).

Even in different subnets, data will travel down only one path per session.

Your best bet is to bond those two interfaces, but even that may not give you double the bandwidth per session.
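
On Windows Server, the bond would look something like this (LBFO teaming - the team and NIC names are examples, and the switch ports must be configured for LACP to match):

# Creates one logical 20GbE interface out of the two 10GbE members.
New-NetLbfoTeam -Name "Team10G" -TeamMembers "Ethernet2","Ethernet3" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

Note that a single TCP session will still hash onto one team member - the team helps when you have multiple concurrent sessions.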
serverboy
Novice
Posts: 3
Liked: never
Joined: Oct 26, 2021 12:42 pm
Full Name: Jay Bojackson
Contact:

Re: Force traffic down both 10GbE NICs

Post by serverboy »

Yeah, the two NICs on the same network as my hosts did nothing. Traffic still went down the one NIC.

HannesK mentioned a post above where they talk about deploying "proxies" on other hosts. Can I not just use one Veeam backup server?

And if I do need to deploy a "proxy" on each host, can it just be a Windows 10 Pro VM or does it have to be a Windows Server edition VM? How many NICs would I add to each - on the ESXi host network so they connect to management, or on my iSCSI port groups which I use for the FreeNAS datastores and the Synology Veeam repository?
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Force traffic down both 10GbE NICs

Post by veremin »

I think it might help if you share a short description of the problem you are facing - why is there a need to utilize NIC teaming?

Sure, one backup server is totally capable of backing up and storing VMs; that's why it serves the role of default proxy and repository server. However, your scenario requires additional steps, including deploying proxy servers on each production host.

The proxy role can be assigned to a Windows 10 machine; v11a supports versions from 1803 to 21H1. Other supported OSes can be found here.

Thanks!
micoolpaul
Veeam Vanguard
Posts: 211
Liked: 107 times
Joined: Jun 29, 2015 9:21 am
Full Name: Michael Paul
Contact:

Re: Force traffic down both 10GbE NICs

Post by micoolpaul »

As others have said, bonding is the best way forward. When you don't bond into a single logical interface, you have multiple devices that are load balanced against (this isn't Veeam-specific; it happens in the OS's networking stack). Depending on the OS, load balancing tends to be performed on Layer 2, Layer 2+3 or Layer 3+4 metrics (and I'm sure there's some edge case that does it completely differently). The point is that it's looking at source and destination information: a connection from your Veeam proxy to your vSphere environment, for example, has a single source and destination, so it always lands on the same link.

The easiest (and best) way around this is to create a bonded interface, so it's not seen as 2x 10GbE but instead as one 20GbE connection. You've still got to have something else in your network, and the network backbone itself, configured to leverage this extra bandwidth before you'll see any benefits.
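
To make that concrete, here's a toy PowerShell sketch (not any vendor's actual hash) of a Layer 3+4 style policy - the same source/destination pair always maps to the same physical NIC:

# Toy illustration: pick an uplink from the connection's addresses/ports.
function Select-Uplink {
    param([string]$SrcIP, [string]$DstIP, [int]$SrcPort, [int]$DstPort, [int]$LinkCount = 2)
    $bytes = [System.Text.Encoding]::ASCII.GetBytes("$SrcIP$DstIP$SrcPort$DstPort")
    $hash = 0
    foreach ($b in $bytes) { $hash = ($hash * 31 + $b) % 65536 }
    return $hash % $LinkCount  # same 5-tuple -> same link, every time
}

# One backup session from proxy to host: always the same NIC.
Select-Uplink -SrcIP "10.0.0.50" -DstIP "10.0.0.11" -SrcPort 50123 -DstPort 902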
-------------
Michael Paul
Veeam Legend | Veeam Certified Architect | Veeam Vanguard
serverboy
Novice
Posts: 3
Liked: never
Joined: Oct 26, 2021 12:42 pm
Full Name: Jay Bojackson
Contact:

Re: Force traffic down both 10GbE NICs

Post by serverboy »

So I have made some changes now.
Moved all VMs to be stored on my FreeNAS datastore, which is presented via iSCSI to my ESXi hosts. Previously I had some VMs on the onboard storage, which was NVMe flash. The FreeNAS datastore is also all NVMe flash.

I have noticed now that when I run my backup job - a job that backs up the whole vCenter - the backup goes through my ISCSI-V40 and ISCSI-V41 NICs on the Veeam server. However, when the job processes the Veeam server itself, the traffic goes down the single NIC which is used to connect to the vCenter.

When the VMs were on the onboard storage, it looks like the traffic would have been coming down the management VMkernel.

On the Server 2019 machine I did try to bond the two NICs, but it wouldn't let me. Would I need 4 NICs in total on the server then? 2 for iSCSI to connect to my Synology, and 2 to connect to vCenter?
I'm assuming I would need to do some sort of LACP/LAG on a port group?
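
As an interim step (untested - the dvSwitch and port group names are placeholders), I was thinking of at least switching the port group to "Route based on physical NIC load" with PowerCLI, so different sessions can spread across both uplinks even without a LAG:

# Sketch: change the dvSwitch port group's load balancing policy.
Get-VDSwitch -Name "dvSwitch01" | Get-VDPortgroup -Name "VM-Network" |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased

Although from what's been said above, a single session would still only use one NIC.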