Comprehensive data protection for all workloads
bretth
Novice
Posts: 7
Liked: 5 times
Joined: Jul 27, 2022 5:31 am
Contact:

Network comms between proxy and repository

Post by bretth » 2 people like this post

Hi,
I am looking at putting an immutable Linux repository on a separate network/VLAN to limit traffic to it. What communications occur between the proxies and repository? What protocols and ports?

Physical VBR server, physical Ubuntu 20.04 repo, vSphere/ESX 7, one proxy VM on each ESX host. Will be going to ESX 8 soon, if that changes things.
Mildur
Product Manager
Posts: 8923
Liked: 2360 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Network comms between proxy and repository

Post by Mildur » 1 person likes this post

Hi Bretth

All required ports are documented in our userguide.
A hardened repository is listed as a Linux Backup repository: https://helpcenter.veeam.com/docs/backu ... positories

Please let me know if something is not clear.
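To sanity-check connectivity from a proxy before locking the VLAN down, you can probe a sample of the documented data mover range. This is just a sketch: `repo01` is a placeholder hostname, and the ports shown assume a fresh v10+ install (check the user guide for the range that applies to your deployment).

```shell
# Probe SSH plus a sample of the Veeam data mover port range from a proxy.
# "repo01" is a placeholder - substitute your repository hostname.
for port in 22 2500 2501 3300; do
    nc -zvw3 repo01 "$port"
done
```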

Best,
Fabian
Product Management Analyst @ Veeam Software
Andreas Neufert
VP, Product Management
Posts: 6779
Liked: 1421 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Network comms between proxy and repository

Post by Andreas Neufert » 2 people like this post

If you use a firewall ensure that the firewall can keep up with the needed traffic speed for backup.
bretth
Novice
Posts: 7
Liked: 5 times
Joined: Jul 27, 2022 5:31 am
Contact:

Re: Network comms between proxy and repository

Post by bretth »

Thanks Fabian, I managed to miss that.
Hi Andreas, yes we're still looking at the best way to achieve this.
bretth
Novice
Posts: 7
Liked: 5 times
Joined: Jul 27, 2022 5:31 am
Contact:

Re: Network comms between proxy and repository

Post by bretth » 1 person likes this post

Second silly question, re this note about the port 2500-3300 requirement: "Note: This range of ports applies to newly installed Veeam Backup & Replication starting from version 10.0, without upgrade from previous versions. If you have upgraded from an earlier version of the product, the range of ports from 2500 to 5000 applies to the already added components."

This is for an in-place upgrade, not to a new server via configuration restore? I've moved from 9.5 to 10, 11a and now 12.1, but each time has been by installing a new server and restoring the configuration, so I will need ports 2500-3300 rather than 2500-5000, right?
tyler.jurgens
Veeam Legend
Posts: 305
Liked: 145 times
Joined: Apr 11, 2023 1:18 pm
Full Name: Tyler Jurgens
Contact:

Re: Network comms between proxy and repository

Post by tyler.jurgens » 1 person likes this post

If you've performed a configuration restore, you still need ports 2500-5000, as the restored configuration carries over the settings from the previous installation.
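On an Ubuntu hardened repository, that translates into firewall rules along these lines. A sketch using ufw, assuming the wider 2500-5000 range from an upgraded/restored configuration; the source subnets are placeholders, and the full per-component port list is in the user guide.

```shell
# Allow SSH from the management host only
# (placeholder IP; SSH is typically disabled again after deployment)
sudo ufw allow from 10.0.10.5 to any port 22 proto tcp

# Veeam data mover ports - restored/upgraded configs use 2500-5000
# (placeholder proxy subnet)
sudo ufw allow from 10.0.20.0/24 to any port 2500:5000 proto tcp

sudo ufw enable
```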
Tyler Jurgens
Veeam Legend x2 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @tylerjurgens.bsky.social
bretth
Novice
Posts: 7
Liked: 5 times
Joined: Jul 27, 2022 5:31 am
Contact:

Re: Network comms between proxy and repository

Post by bretth » 1 person likes this post

Thanks Tyler. Not what I hoped, but good to know.
NickKulkarni
Enthusiast
Posts: 30
Liked: 7 times
Joined: Feb 08, 2021 6:11 pm
Full Name: Nicholas Kulkarni
Contact:

Re: Network comms between proxy and repository

Post by NickKulkarni »

It may be a bigger project than you expect.

I have a similar setup: multiple ESXi hosts with one Linux proxy VM on each, a separate vNIC and vSwitch for backup on each host and also on the VM-based Veeam B&R server, which resides on one of the hosts (not best practice, I know) and feeds a pair of external NAS repositories via iSCSI on the VM itself, not via ESXi. I inherited this when I took over, but it wasn't working well and needed tweaking.

My predecessor had put a separate vNIC and vSwitch into the Veeam server and the Windows proxies on one host, but backups were clunky, slow, and frequently generated error messages about lost connectivity to the proxy or ESXi hosts. I was seeing network saturation during backups despite the supposedly separate backup network. Throttling the network didn't help much with the errors, other than slowing the backups slightly. I did a lot of reading and opened support tickets over the years.

Veeam B&R relies on what Windows is set up to do on the network. You can add the backup network to the preferred networks table and throttle it too, but it doesn't always do what you expect. It does not throttle iSCSI traffic to a repository, because it sees iSCSI extents as local disks. Running a health check will max out your NIC, and if that NIC is also talking to the other proxies and hosts, you get a timeout or disconnection error.

In Windows, if you set up the second vNIC via the GUI and give it the same settings as the primary NIC, i.e. a default gateway and DNS registration, it will muck things up. You will get multiple DNS entries with the two IP addresses on the same name, and Windows will send packets out via either interface based on how it feels at the time. Veeam relies on DNS resolution in Windows, so this will be unpredictable. It needs to be fixed first.

The answer is to remove the default gateway from the second vNIC and disable DNS and NetBIOS registration on it. Instead, configure a static route with a higher metric using route from the command line. I then had to edit the Windows hosts file on the Veeam server to map the DNS names of the Windows-based proxies to their backup-network IP addresses, or it would use the DNS-resolved production LAN IPs for them during jobs. This helped, but tests using ping and pathping showed very slow resolution of the first hop outside the ESXi host to the backup VLAN subnet. The next bit is complicated, as you are dealing with routing between VLANs in the Cisco switch as well as internally in the ESXi host from the VM.
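As an illustration of the routing and hosts file step, from an elevated Windows command prompt (the subnet, gateway, metric, and proxy names here are hypothetical examples, not actual values from this setup):

```shell
:: Persistent static route to the backup subnet via the backup vNIC's
:: gateway, with a higher metric so it applies only to that subnet
route -p add 10.99.0.0 mask 255.255.255.0 10.99.0.1 metric 50

:: Pin proxy names to their backup-network IPs so Veeam does not use
:: the DNS-resolved production LAN addresses during jobs
echo 10.99.0.21 proxy01 >> %SystemRoot%\System32\drivers\etc\hosts
echo 10.99.0.22 proxy02 >> %SystemRoot%\System32\drivers\etc\hosts
```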

ESXi has a single default network stack, created at install on the Management Network interface, and I believe Veeam has no choice but to use it. Adding vSwitches for the backup network does not change the default gateway. The answer is to add a custom TCP/IP stack for the backup gateway, then create a VMkernel adapter on the backup vSwitch and assign the custom TCP/IP stack to it. This gives the ESXi host a static IP on the backup network that you can ping from outside, in addition to the management IP. If you can't see it from outside, it isn't working; check your config. Traffic from the ESXi host will now route out via the physical NIC(s) assigned to the backup vSwitch.
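The custom stack setup above can be sketched with esxcli on the host. The stack name, vmk number, port group, and addresses are examples I've substituted in, not a definitive recipe; the same steps can also be done in the vSphere client.

```shell
# Create a custom TCP/IP stack for backup traffic
esxcli network ip netstack add -N backup

# Create a VMkernel adapter on the backup port group, bound to the new stack
esxcli network ip interface add -i vmk2 -p Backup-PG -N backup

# Give the adapter a static IP on the backup network
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.99.0.30 -N 255.255.255.0

# Default gateway for the backup stack only, leaving the default stack alone
esxcli network ip route ipv4 add -N backup -n default -g 10.99.0.1
```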

That step above is vital, or you will have very slow resolution of the first hop between the ESXi host and the external switch VLAN. Remember that ESXi vSwitches are Layer 2 devices; they do not understand IP-level routing. Without the custom TCP/IP stack, traffic will leave the ESXi host via the default gateway, hit the switch, which will route it to the second VLAN, and the data will come back the same way. If you are using a LAG to connect the backup network on the ESXi host to the external physical switch, be careful how you assign the teaming policy; Route Based on IP Hash is, if I recall, the only one that works. Also, the external LAG has to be static, not dynamic.

Although the textbook says a LAG creates resiliency and doesn't add bandwidth, that isn't set in stone for some repositories. Monitoring one of our pair of repositories shows that TrueNAS (previously FreeNAS) with a two-NIC LAG will route uploads on one NIC and downloads on the other. Cisco and ESXi are not so clever, and monitoring the ESXi host shows that traffic during backup runs primarily across a single NIC. There is one final gotcha, and it got me too. Gostev pointed us to an undocumented but acknowledged bug in VMware ESXi: any network handled via a VMkernel is throttled by ESXi to around 45% of NIC capacity. There is no workaround for that, so I throttled the network to just under that cap for safety. It has helped a lot with latency.

Hope this helps.
