Comprehensive data protection for all workloads
theflakes
Enthusiast
Posts: 33
Liked: never
Joined: Jun 09, 2010 11:16 pm
Full Name: Brian Kellogg
Contact:

Force Veeam to use backup network

Post by theflakes »

We will soon be migrating to a new VMware environment. We have NICs dedicated to vMotion on an isolated 10Gb network, and our Veeam backup server will be connected to this network via 10Gb as well. How do we force the Veeam server to use this network for backups rather than our 1Gb production network? vCenter will be a VM with a NIC on both networks; Veeam is on a physical server, and we are using shared SAS storage with three hosts. Do we just put host entries for the VMware hosts and the vCenter server on the Veeam server, resolving their DNS names to the isolated 10Gb vMotion network?
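For example (names and addresses below are made up, just to illustrate the idea), something like this in C:\Windows\System32\drivers\etc\hosts on the Veeam server:

# hypothetical entries: resolve the ESXi hosts and vCenter to their 10Gb addresses
10.10.20.11   esx01.example.local    esx01
10.10.20.12   esx02.example.local    esx02
10.10.20.13   esx03.example.local    esx03
10.10.20.20   vcenter.example.local  vcenter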

thanks
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Force Veeam to use backup network

Post by Gostev » 1 person likes this post

Nice, your backups will fly... especially with v7.

This networking stuff is not specific to Veeam or any other application, for that matter, because it happens at a lower OSI level... you are right: just make sure the hosts' DNS names resolve to IP addresses on the 10Gb network on your backup server, and then it is up to the Windows OS to use the correct NIC to reach an IP address on the 10Gb network.
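For example, on the backup server you could sanity-check resolution and the path with something like this (host name and address are made up):

rem should now return the 10Gb address from DNS or the hosts file
nslookup esx01.example.local

rem the traced path should stay on the 10Gb subnet
tracert -d 10.10.20.11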

One other thing to remember: since network processing mode (NBD) uses the ESXi management network, remember to make the 10Gb network a management network in the ESXi settings.
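If I remember correctly, besides ticking "Management traffic" on the 10Gb vmkernel adapter in the vSphere Client, this can also be done from the ESXi shell along these lines (vmk1 is only an example adapter name; verify the exact syntax against your ESXi version):

esxcli network ip interface tag add -i vmk1 -t Management
esxcli network ip interface tag get -i vmk1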

I assume you are going to store backups right on the Veeam server? If not, the same applies to backup repositories as well: you want to keep them on the 10Gb network, and also make sure their DNS names resolve to the IP addresses of the backup repository NICs on the 10Gb network.
yizhar
Service Provider
Posts: 182
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Force Veeam to use backup network

Post by yizhar »

Hi.

You can also consider deploying a backup proxy VM that will use hot add (appliance mode) and will have direct access to the SAS storage.

Just another option to check - I'm not saying that it is better.

Yizhar
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Force Veeam to use backup network

Post by Gostev »

NBD on 10Gb will work much faster than hot add. Both will be bottlenecked by storage, but NBD does not have the additional delays from the hot add and hot remove operations, which take noticeable time.
yizhar
Service Provider
Posts: 182
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Force Veeam to use backup network

Post by yizhar »

Gostev wrote: NBD on 10Gb will work much faster than hot add. Both will be bottlenecked by storage, but NBD does not have the additional delays from the hot add and hot remove operations, which take noticeable time.
Hi.

1. I agree with the above statement.

2. But there are also advantages to hot add mode, such as:

Faster restore (the original poster can check full VM restore time in both modes to compare).

I'm not sure which mode will work faster overall, as it will also depend on proxy placement, parallel processing, target performance, how powerful the backup server is, and whether he wants to use production resources (CPU+RAM) during backup or only the backup machine's horsepower.
I guess that in the end both methods will provide similar results.
Both methods will use ESXi resources (NBD will mostly use network links, while hot add will use CPU+RAM, and both will use disk access).
Hot add might still be more efficient in some cases, and NBD in others.

I suggest testing both modes with full and also incremental backups; in any case, keeping a ready-to-use VM for hot add gives you a faster restore option when/if needed.

Yizhar
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Force Veeam to use backup network

Post by Gostev »

Interesting, I just realized that I have never heard about full VM restore performance over NBD on 10Gb Ethernet. I've had so many backup performance numbers sent to me, but never restore numbers. Has anyone tried to perform an NBD restore over 10Gb?
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Force Veeam to use backup network

Post by tsightler » 1 person likes this post

In general, NBD mode is not a bottleneck for restores with 10Gb.
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Force Veeam to use backup network

Post by dellock6 »

Just a quick reply for the original poster: we have almost the same configuration in our datacenter, and to make sure backup traffic flows over the desired networks, all the Veeam components (proxies and repositories) are registered in Veeam by their IP addresses.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
yizhar
Service Provider
Posts: 182
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Force Veeam to use backup network

Post by yizhar »

Gostev wrote: Interesting, I just realized that I have never heard about full VM restore performance over NBD on 10Gb Ethernet. I've had so many backup performance numbers sent to me, but never restore numbers. Has anyone tried to perform an NBD restore over 10Gb?
Indeed - interesting to check.

I was referring to the issue mentioned here by Luca:

http://forums.veeam.com/viewtopic.php?f ... 261#p82918

http://cormachogan.com/2013/07/18/why-i ... s-so-slow/

The issue is not only the transport protocol (NBD vs hot add), but also the write semantics to the VMFS datastore.
I don't know if there will be a difference, or how big; tsightler also mentions that the hot add method has overhead of its own:
http://forums.veeam.com/viewtopic.php?f ... 261#p83251

So it would be interesting to test and share results comparing whatever methods are available, while also checking the overhead on production host resources, for both backup and restore.

Yizhar
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Force Veeam to use backup network

Post by dellock6 »

Thanks to that article, we are right now preparing an NFS datastore in our datacenter. The idea is to use it for VMDK restores and then use Storage vMotion to move the restored VM to our production datastores, which are all iSCSI right now. I'll probably be able to do some tests, but not soon. I trust Tom, he's always right in his findings :)

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
NickKulkarni
Enthusiast
Posts: 30
Liked: 7 times
Joined: Feb 08, 2021 6:11 pm
Full Name: Nicholas Kulkarni
Contact:

Re: Force Veeam to use backup network

Post by NickKulkarni »

Gostev wrote: Jul 27, 2013 9:22 pm Nice, your backups will fly... especially with v7.

This networking stuff is not specific to Veeam or any other application, for that matter, because it happens at a lower OSI level... you are right: just make sure the hosts' DNS names resolve to IP addresses on the 10Gb network on your backup server, and then it is up to the Windows OS to use the correct NIC to reach an IP address on the 10Gb network.

One other thing to remember: since network processing mode (NBD) uses the ESXi management network, remember to make the 10Gb network a management network in the ESXi settings.

I assume you are going to store backups right on the Veeam server? If not, the same applies to backup repositories as well: you want to keep them on the 10Gb network, and also make sure their DNS names resolve to the IP addresses of the backup repository NICs on the 10Gb network.
Hi, I just found this and am having a problem making it work.

The complication is that we are using existing production servers as proxies.

There is a NIC on the VM LAN for production use and a second NIC on the backup LAN.

I have followed best practice for multihoming, i.e. no default gateway or DNS on the second NIC, just a permanent static route from that NIC to the router interface.
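For reference, the route looks roughly like this (all addresses are made up; 10.10.20.254 would be the router interface on the backup LAN, 10.10.30.0/24 a network reached through it):

rem persistent route to networks reached via the backup LAN router interface
route -p add 10.10.30.0 mask 255.255.255.0 10.10.20.254 metric 10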

There is no traffic across the second backup NIC during backups, primarily, I believe, because DNS resolves the proxy names to the VM LAN NIC rather than the backup NIC.

So I am asking whether there is a way to force Veeam to use the second NIC by specifying the proxies by IP address rather than by their DNS host names.
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Force Veeam to use backup network

Post by Gostev »

Yes, we allow specifying preferred networks in the network traffic rules.
NickKulkarni
Enthusiast
Posts: 30
Liked: 7 times
Joined: Feb 08, 2021 6:11 pm
Full Name: Nicholas Kulkarni
Contact:

Re: Force Veeam to use backup network

Post by NickKulkarni »

Case #05225248 Backup using Hot Add (NBD) and Production Network for a single job

Hi Gostev,

First up, my condolences; I read your Forums Digest post on Michael White and went on to read his blog. I hope it gets archived somewhere.

I get what you are saying about preferred networks, but the word is "preferred", and I assume that means that if Veeam has a reason to, it can and will use other networks. Am I right?

Watching my backups previously on the multihomed servers, I was seeing backup traffic on the production NIC and Windows traffic on the backup network.

Part of that was my predecessor's Windows networking configuration, I believe. He had default gateways on both the production and backup NICs, as well as DNS servers on both. That, as I understand it, is a no-no.

I have since updated that configuration:

The backup NIC has only a static IP and subnet mask in the Windows networking GUI.
Purged the static DNS entries for the backup NIC IPs, as they pointed to exactly the same host name as the production NIC IPs.
Used the Windows CLI to create a persistent static route to the backup network gateway on the external Cisco switch, with a metric slightly higher than the production route but lower than the others in the list.
All the research I have done says this is the way to make sure Windows routes all internet traffic through the production NIC, along with everything else production-related.

The lack of DNS on the backup NIC caused Veeam B&R some problems at first, and a community forum post said that means I have to put static lookups in the hosts file on each of the servers. This I have done, and it seems to be working.
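The entries look something like this on each server (names and addresses are made up):

# backup LAN addresses of the Veeam components (examples only)
10.10.20.5    veeam-br.example.local    veeam-br
10.10.20.6    repo01.example.local      repo01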

The joys of Cisco LAG and VLANs, and how this does or does not work with VMware ESXi's single default gateway, are a whole other issue on the backup network between my three physical hosts that I am still working on. I'm still trying to get my head around how an ESXi host using a vSwitch (which, as I have read, doesn't behave like a regular layer 2 switch), combined with a single default TCP/IP stack and a single default gateway on the management network, routes out via two different LAGs on physical NICs through two different VLANs and default gateways. And what if it doesn't, and all my traffic is actually being routed by the Cisco switch through a single VLAN default gateway on the management VLAN onwards to the production and backup VLANs?

The reason I mention that last part about gateways in particular is that, despite all of the above configuration changes and having a proxy on each physical host, one of my backups is still using NBD across the production NIC of a proxy. That is, if I am reading the logs and watching the traffic in Task Manager correctly during backup.

I would really like to hear what you and other experts at Veeam think about this. This is, for me at least, a very complex multi-level problem.

As I see it, this is Veeam attempting to cross at least three distinct network segments: Windows networking inside the server/proxy to select the NIC to use, then onward between proxies across ESXi host networking between physical hosts, via Cisco layer 3 routing between VLANs. Obviously it is something that happens every day, but I can't seem to find anything definitive on best practice for configuring networks (layer 2 vs layer 3, and LAG vs failover in ESXi and Cisco VLAN environments) in the datacentre for VMware, and how that impacts Veeam.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Force Veeam to use backup network

Post by tsightler » 2 people like this post

Hi Nicholas,

I think the important thing to remember is that Veeam itself actually has zero control over which layer 2 networks the traffic is actually routed over; that is 100% up to the routing of the host OS and any involved networking gear. The only thing Veeam can control via "Preferred Networks" is, if a system has multiple IP addresses, which IP address is "preferred" to establish the TCP connection. For example, if a system has two IP addresses, 10.10.10.1 and 10.10.20.1, and the "10.10.20.0/24" network is preferred, then Veeam will move that IP to the top of the list when attempting to establish connections between Veeam components. Note that, when establishing connections, Veeam always tries addresses in order, so "Preferred Network" simply changes the order vs the randomly discovered order. However, once any given IP is selected to establish the connection between the hosts, it is entirely up to the OS and network configuration what path that traffic takes to get between the two systems.

Also important to note is that Veeam only has this control over connections between its own components. Preferred networks have no influence on, for example, the connection to vCenter or the connection to the ESXi host for NBD traffic. These are 100% controlled by the IP or hostname resolution for the host. If you have an ESXi host named "esxi01" and it has two IP addresses on two different networks, 10.10.10.2 and 10.10.20.2, then the connection will be made to whichever IP address "esxi01" resolves to, and the route to get to that address is determined by the OS and network gear. It's really as simple as that.

Once you understand the layer 3 stuff above, i.e. how the components determine what IP addresses to use to make the connections between them, then you have to look at your routing and network configuration to determine what layer 2 network will actually be used to pass this traffic. While it may seem like a complex problem, it's really pretty simple if you break it down to the network components. Other than that single "preferred network" setting in Veeam, which, as mentioned above, does nothing other than control the order in which IP addresses are tried by Veeam, everything else is just standard networking 101 and you can use your normal tools like ping/tracert/arp and tools for your specific network gear to determine the physical path the traffic will take.
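For example, from a proxy something like this shows what a name resolves to and which path would be taken (names and addresses below are placeholders):

rem which address does the ESXi host name resolve to?
nslookup esxi01

rem which local interface and gateway are used to reach that address?
route print
tracert -d 10.10.20.2

rem which MAC answered for that address on the local segment?
arp -a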
NickKulkarni
Enthusiast
Posts: 30
Liked: 7 times
Joined: Feb 08, 2021 6:11 pm
Full Name: Nicholas Kulkarni
Contact:

Re: Force Veeam to use backup network

Post by NickKulkarni »

Hi tsightler,
Thanks for getting back to me.
... Veeam itself actually has zero control over which layer 2 networks the traffic is actually routed over; that is 100% up to the routing of the host OS and any involved networking gear... Preferred networks have no influence on, for example, the connection to vCenter or the connection to the ESXi host for NBD traffic. These are 100% controlled by the IP or hostname resolution for the host... everything else is just standard networking 101 and you can use your normal tools like ping/tracert/arp and tools for your specific network gear to determine the physical path the traffic will take.
Thanks for clarifying what I suspected, i.e. that Veeam has no control over networking other than expressing a preference for a particular IP range. This makes sense.

I also realise that all of this is guest and host OS configuration interacting with Cisco and VMware networking, and you are most likely going to tell me that this isn't the forum to be discussing it, but if you do, I think you will be missing the point.

As you so clearly point out, Veeam is totally at the mercy of these underlying networking facts and can do nothing other than suggest a preferred network. If customers don't get the underlying guest OS networking, domain DNS and physical VLAN structure correct, you may be seeing problems in Veeam that have nothing to do with Veeam and everything to do with what you call "networking 101".

Please let me explain my thinking with my own case as an example.

For example, Veeam is currently not happy with my multihomed proxies: it is confusing a single managed server, identified by its backup network IP, with the DNS name of the same proxy (which is on my VM production network on a different subnet), declaring a BIOS UUID conflict, and defaulting to NBD instead of hot add for that proxy. The proxies are multihomed because they are dual role, i.e. they are production servers that double as proxies in the evening for backup.

I am testing now, in conjunction with support, to see if this is because I added the managed server by the IP address of its backup NIC instead of its DNS name. This was done because Veeam didn't seem to use the preferred network when the DNS name resolved to the production network NIC IP.

Whilst this may be networking 101, it isn't exactly simple to troubleshoot. I am having trouble explaining to Veeam support that the UUID conflict is possibly non-existent and that the VM host name and backup IP address are one and the same VM. If support can't understand this Veeam behaviour, then maybe networking 101 isn't that simple, and in that case what hope have we customers got?

It really is interesting that you call this simple.
While it may seem like a complex problem, it's really pretty simple if you break it down to the network components.


Often IT professionals inherit, from their predecessors and previous outsourced support vendors, some very complex networks that they had nothing to do with building. We can't tear these down and recreate them without serious effort and concomitant expensive downtime. I have been troubleshooting Veeam backup issues since I arrived here two years ago. The recent upgrade to version 11a broke my SureBackup jobs, and I ended up having to manually recreate my virtual lab because the multiple-network complexity was not handled by Veeam's wizard.

VMware ESXi and guest OS network configuration with regard to physical switching, VLAN topology, and layer 2 vs layer 3 routing isn't exactly "simple" to me, and I suspect it is the same for lots of other people too.

I think it gets complex when you realise that, at the ESXi level, vSwitches don't behave like regular layer 2 switches. Add in that ESXi networking is, in reality, limited to a single default TCP/IP stack with a single default gateway for anything other than defined traffic for internal ESXi services (vMotion, provisioning, etc.), and you start seeing what I am trying to say. Once you introduce LAG and VLANs in the physical layer (physical NICs and external switches), the complexity of interactions and consequences grows.

I inherited a production physical network between two physical ESXi hosts. This runs across a pair of four-NIC LAGs on an external Cisco VLAN with its own default gateway, using IP hash routing at the ESXi layer. There is also a backup physical network across three servers (two ESXi hosts and a backup NAS server) on three two-NIC LAGs, using IP hash and iSCSI to an NFS NAS. All of these go into a single stack of Cisco switches. Is that simple?
...everything else is just standard networking 101 and you can use your normal tools like ping/tracert/arp and tools for your specific network gear to determine the physical path the traffic will take
Because ESXi and inter-VLAN routing at the Cisco level are transparent to the guest OS, traceroute, ping and pathping don't actually show anything of real use. The ARP tables on the Cisco switches are layer 2 and, I believe, get ignored by ESXi anyway, so I am not sure how that will help, but I will certainly give it a look.

Given that the ESXi vSwitch doesn't behave like a regular layer 2 switch, and the guest OS servers are multihomed on two different IP subnets with two different external VLAN default gateways (neither of which is on the ESXi default TCP/IP stack, which sits on the management IP subnet by default), can you tell me whether the packets are all getting inter-VLAN routed by the switches via the ESXi management default gateway? I would love you to tell me I am an idiot and there is a simple answer, which is X.

Until then I will unfortunately be stuck trying to explain to Veeam support that Veeam doesn't understand that IP address X and host name Y are the same VM, and that there is no BIOS UUID conflict.

This is why I believe that Veeam does need to look deeply into its interaction with the underlying network and provide some guidance to us ignorant consumers about what it means for Veeam, and possibly the same to their support staff, because (and I am including myself here) we consumers are very good at complicating what should be simple, mucking up what should be straightforward, and breaking what worked very well in testing in the lab.

Can't remember who said it but the quote is "I never saw a plan of engagement that survived first contact with the enemy"
