Host-based backup of Nutanix AHV VMs.
Amarokada
Service Provider
Posts: 135
Liked: 12 times
Joined: Jan 30, 2015 4:24 pm
Full Name: Rob Perry

Optimising Nutanix and Workers

Post by Amarokada »

Hi all

All of our backup jobs show similar stats in the bottleneck report. Our Nutanix cluster is all-flash and we have 10 workers with 6 vCPU/6 GB RAM doing 4 tasks each. The cluster has 22 hypervisors, so each worker runs on its own hypervisor (we don't set this, we just leave it on automatic).

09/10/2024 08:24:00 :: Load: Source 99% > Proxy 100% > Network 0% > Target 10%

Are there any recommendations on improving source and proxy speeds? Everything is on 25Gb networking.

I also have a theory that Nutanix daily incremental backups are larger than on other platforms, maybe because the virtual disk block sizes are bigger, meaning CBT flags more of the disk as changed. Just a guess at this point.
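
For illustration, here's a toy model of that guess in Python. The block sizes and write pattern are made up; this doesn't reflect actual AHV or Veeam CBT internals, just the general effect of coarser change-tracking granularity.

Code:

import random

disk_gb = 500
writes = 100_000  # random small writes since the last backup

random.seed(42)
disk_kb = disk_gb * 1024 * 1024
offsets = [random.randrange(disk_kb) for _ in range(writes)]  # write offsets in KB

for block_kb in (64, 1024):  # hypothetical tracking granularities
    dirty = {off // block_kb for off in offsets}  # each write flags a whole block
    gb = len(dirty) * block_kb / 1024 / 1024
    print(f"{block_kb:>5} KB blocks -> {len(dirty):>7} dirty blocks, ~{gb:.1f} GB incremental")

Same writes either way, but the coarser granularity drags far more untouched data into each incremental, which would match what we're seeing.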
Amarokada
Service Provider
Posts: 135
Liked: 12 times
Joined: Jan 30, 2015 4:24 pm
Full Name: Rob Perry

Re: Optimising Nutanix and Workers

Post by Amarokada »

Just to add to this: the storage container where the VMs reside on Nutanix has compression enabled. I'm wondering if this puts a significant load on the backup cycle.
johannesk
Expert
Posts: 168
Liked: 37 times
Joined: Jan 19, 2016 1:28 pm
Full Name: Jóhannes Karl Karlsson

Re: Optimising Nutanix and Workers

Post by johannesk »

I'm not finding clear information about how to optimize workers for Nutanix backups; even the Veeam best practice guide has nothing on Nutanix proxy design.
The sizing guide
https://helpcenter.veeam.com/docs/vbahv ... #appliance
says you should just use the embedded worker for <1000 VMs and <100 jobs. I guess all the worker load is then put on the Nutanix node running the embedded worker, and for 500 VMs you need more than the 4 tasks configured by default on the embedded worker.

Consider a use case where the setup needs 20 simultaneous tasks on a Nutanix cluster with 5 nodes and 500 VMs:

1) What is the penalty of using dedicated workers as opposed to the embedded one? Is it more than just more IPs? Using the embedded worker configured with 20 tasks will put all the load on the same Nutanix node, I guess. With dedicated workers you could distribute the load by running one on each node with 4 tasks each, for 20 in total (see the sketch after this list).

2) What should be considered when placing an embedded/dedicated worker on a VLAN? Is it important that it's on the same VLAN as the Veeam repository, or the same VLAN as Nutanix management?
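
For what it's worth, here is a minimal sketch of the arithmetic behind option 1. The numbers come from the use case above; nothing here is a Veeam API, just back-of-the-envelope maths.

Code:

import math

nodes = 5            # Nutanix cluster nodes
required_tasks = 20  # simultaneous tasks needed for the 500 VMs

# Dedicated workers, one per node, tasks spread evenly:
tasks_per_worker = math.ceil(required_tasks / nodes)
print(f"dedicated: {nodes} workers x {tasks_per_worker} tasks each "
      f"= {nodes * tasks_per_worker} concurrent tasks, spread over all nodes")

# Embedded-worker alternative: one worker carries everything:
print(f"embedded: 1 worker x {required_tasks} tasks, all on one node")

So the dedicated layout gets the same 20 task slots while spreading the I/O and CPU across the cluster instead of concentrating it on one node.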
Kochkin
Veeam Software
Posts: 79
Liked: 34 times
Joined: Sep 18, 2014 10:10 am
Full Name: Nikolai Kochkin

Re: Optimising Nutanix and Workers

Post by Kochkin » 1 person likes this post

For the best performance, you may want to move the workload off the embedded worker entirely and create dedicated workers (up to one worker per node).
That minimizes the amount of data sent between nodes, so it may increase read speed in some cases. The only penalty is a few minutes spent turning the workers on at job start, plus a bit more CPU/RAM used on the cluster.

Regarding the networking question, the data flow looks like this:

Code:

                                                                                                      
    ┌──────────┐   ┌──────┐   ┌──────────────────┐   ┌─────────────────┐  
    │Hypervisor├──►│Worker├──►│Repository Gateway├──►│Backup repository│  
    └──────────┘   └──────┘   └──────────────────┘   └─────────────────┘  
So the heaviest load is the worker reading from the hypervisor (via iSCSI). After that, the data gets compressed on the worker and sent further along. Given your bottleneck statistics, it makes sense to optimize the first arrow and also to check that the workers have enough CPU/RAM.
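
To make those bottleneck percentages concrete, here's a toy pipeline model. The throughput numbers are invented for illustration, assuming each stage's percentage roughly reflects how busy it was.

Code:

# The pipeline runs at the speed of its slowest stage; every other
# stage is then idle part of the time. Capacities below are made up.
stages = {
    "Source (iSCSI read)": 900,   # MB/s each stage could sustain alone
    "Proxy (compress)":    850,
    "Network (25 Gb)":     3000,
    "Target":              5000,
}

effective = min(stages.values())  # slowest stage sets the pace

for name, capacity in stages.items():
    busy = effective / capacity * 100
    print(f"{name:20s} {busy:5.1f}% busy")

With source and proxy capacities this close, both sit near 100% (like your "Source 99% > Proxy 100%"), so speeding up reads alone won't help much unless the workers also get more CPU/RAM for compression.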


++ You also mentioned compression enabled on Nutanix itself. It may affect disk read performance, but it's hard to tell how much -- the Nutanix team should know more about this.
