I am not a representative of Veeam. I am a user. No warranty, no guarantees. YMMV.
Preamble
I'm almost positive this configuration is not officially supported by the Veeam team, but I wanted to share the steps for our environment of Nutanix clusters and AHV proxies. These are not the only steps to accomplish this; I'm sure there are other methods. I simply found this (after much experimentation/failure) to be the fastest and cleanest method in our environment.
We have about 5 AHV clusters. Each of our AHV proxies is configured similarly - a proxy with one NIC in the "backup management" subnet/VLAN, which has firewall rules permitting access to the Veeam infrastructure, and a second NIC on the same subnet/VLAN as the Nutanix cluster (AHV hosts, CVMs, data services/iSCSI IP). Additionally, all of our Nutanix clusters have a certificate applied, issued from our enterprise ADCS PKI, for improved security.
Obvious advantages of this configuration - fewer firewall rules required when one NIC is in the same subnet as the Nutanix cluster, no need to "hop" gateways when transferring large quantities of data from AOS storage, and better security.
Unfortunately, as of this writing, this is still not an officially supported workflow, so I provide the steps below that worked for me when deploying v4 of the AHV proxy.
I will assume Nutanix network segmentation is not in use, as we don't take advantage of that (yet).
Steps
- Add your Nutanix AHV cluster to the VBR console, nothing new here.
- Start the wizard for deploying a new AHV backup proxy.
- When configuring the network for the proxy, set up only the "management" interface (DNS resolution, NTP sync, updates, gateway/forwarding, SSH, web interface).
- On the apply page, just wait it out. It will look like nothing is happening, and that's mostly correct. For me, it takes 15-20 minutes of waiting for the wizard to exit with a warning similar to "Nutanix AHV proxy has been deployed with warnings". At this point the proxy is not usable, and if you try to access the web interface you are likely to get an "ajax" error. Close the wizard.
- From your Prism UI, gracefully shut down the proxy VM. Add a second NIC and connect it to the same VLAN as your Nutanix AHV hosts/CVMs. Power the proxy VM back on.
- Optional - Activate SSH. If you're like me, you want SSH immediately to avoid working with the laggy console. After the VM has booted, log in with your configured username/password. The SSH service is already enabled and running; all you need to do is permit SSH access in the firewall. Execute the command:
Code: Select all
sudo ufw allow ssh
- Execute the following command. You should now see the new interface in a DOWN state. For me, it has always been ens4, but YMMV.
Code: Select all
ip link
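If the `ip link` output is busy, the DOWN interface can be picked out with a quick filter. The sketch below runs against illustrative sample output (the interface names and the exact `ip link` formatting on your proxy will differ); on the proxy itself you would pipe the real command instead:

```shell
# Illustrative only: sample `ip link` output captured into a variable.
# On the proxy, replace the echo with the real command:  ip link | awk ...
ip_link_sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
3: ens4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN'

# Print the name of any interface reported as DOWN
echo "$ip_link_sample" | awk -F': ' '/state DOWN/ {print $2}'
```

With the sample above, this prints ens4 - the freshly added, not-yet-configured NIC.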
- Configure netplan for your new NIC. I've found the easiest method is a command similar to the following. Adjust the link (ens4) and CIDR-formatted address (198.51.100.1/24) as necessary for your environment.
Code: Select all
sudo netplan set "ethernets.ens4.addresses=[198.51.100.1/24]"
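For reference, `netplan set` simply writes the setting into a YAML file under /etc/netplan/ (typically 70-netplan-set.yaml on recent releases, though the exact filename can vary by version). If you prefer to edit the file by hand, the equivalent config looks roughly like this:

```yaml
# Approximate equivalent of the `netplan set` command above;
# interface name and address are placeholders for your environment.
network:
  version: 2
  ethernets:
    ens4:
      addresses:
        - 198.51.100.1/24
```

Note the deliberate absence of a gateway or routes here: the default route stays on the management NIC, and this NIC only carries same-subnet traffic to the cluster.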
- Apply the configuration with the following command, then use ip addr and ip route to confirm your networking has updated.
Code: Select all
sudo netplan apply
- Optional - If you are using custom certificates for your Nutanix clusters, I strongly advise following KB4433 to install your root CA's certificate.
- Execute the following command to confirm TLS connectivity to your cluster from the AHV proxy, substituting in the FQDN or vIP address of your Nutanix cluster. You should see a message that the connection was established, and if your PKI is set up correctly, a line in the output showing "Verification: OK". Exit s_client with CTRL+D. If this test does not succeed, you likely made a mistake in your networking.
Code: Select all
openssl s_client -brief -connect NUTANIX_FQDN_OR_IP:9440
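If you want to script this check rather than eyeball it, the output can be grepped for the verification line. The sketch below runs against sample s_client output so it is self-contained; on the proxy you would substitute the real command, e.g. `echo | openssl s_client -brief -connect NUTANIX_FQDN_OR_IP:9440 2>&1` (note the 2>&1 - the -brief summary goes to stderr):

```shell
# Illustrative only: sample `openssl s_client -brief` output.
# On the proxy, replace the printf with the real s_client invocation shown above.
s_client_sample='CONNECTION ESTABLISHED
Protocol version: TLSv1.3
Ciphersuite: TLS_AES_256_GCM_SHA384
Verification: OK'

if printf '%s\n' "$s_client_sample" | grep -q '^Verification: OK$'; then
    echo "TLS verification passed"
else
    echo "TLS verification FAILED - check networking and CA trust"
fi
```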
- Return to the VBR console. Browse to backup proxies, right-click your previously added proxy, and click remove. DO NOT delete the proxy VM from the cluster; we only need to remove the proxy from the console.
- In the VBR console, add a proxy and select the AHV type. Select the option to connect to an existing proxy and follow all your normal steps, connecting to the existing cluster and the AHV proxy already present. Getting a certificate warning for the AHV proxy itself is normal and expected. The apply page should complete very quickly this time (under a minute) since everything is now configured on the AHV proxy.
- I'm not exactly sure why, but I have better success at this stage if I reboot the AHV proxy one final time before trying to use it or access the management console.
- Perform any other configurations and testing (NTP, updates, email notifications, backup, VM restore, FLR, etc).
- Enjoy your proxy!