mkramer
Novice
Posts: 3
Liked: never
Joined: Sep 02, 2011 11:58 am
Full Name: Matt Kramer

NetApp Storage Integration Backup

Post by mkramer »

We use NFS against a dual-controller 8020 running in Cluster Mode. One controller is the primary owner of aggr1 and the second controller is the primary owner of aggr2. We home the NFS IP for each volume on the controller "closest" to the disks, so our design looks like this:

vmk1-subnetA->node1-ip-subnetA->nfs_vol1->vol1->aggr1->node1->physical disk
vmk2-subnetB->node2-ip-subnetB->nfs_vol2->vol2->aggr2->node2->physical disk

My question is how does the Veeam Proxy server decide which IP it will use when connecting to the controller. If the VMDKs for the VM being backed up are homed on subnetB, it looks as though the proxy still connects to the IP on subnetA to retrieve the data. So far, my testing suggests it always takes the first data LIF in the SVM on the cluster. This leads to sub-optimal I/O processing when using storage integration.

When backing up a test VM over hot-add (which uses the underlying vmk's) vs. NetApp storage integration, I am seeing a 25% boost with hot-add over direct storage access. The complete opposite of what one would desire!

Do I need to multi-home the Veeam proxy VM so it is on both subnetA and subnetB? If so, will it then choose the correct "path"?
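
One quick way to verify which source address (and therefore which NIC and subnet) Windows would pick for each LIF once the proxy is multi-homed is a routing-table probe. A minimal Python sketch, with placeholder LIF addresses:

import socket

def source_ip_for(dest_ip, port=2049):
    # A UDP "connect" sends no packets; it just asks the OS routing
    # table which local address would be used to reach dest_ip.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest_ip, port))
        return s.getsockname()[0]
    finally:
        s.close()

# Placeholder LIF addresses for subnetA and subnetB
for lif in ("10.0.1.10", "10.0.2.10"):
    print(lif, "->", source_ip_for(lif))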

Case # 00674935
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: NetApp Storage Integration Backup

Post by Gostev »

mkramer wrote: My question is how does the Veeam Proxy server decide which IP it will use when connecting to the controller.
On the storage side, the adapter is selected based on speed capabilities, with the selection made among all adapters through which NFS traffic is possible. For example, when both 1Gb and 10Gb adapters are available, we will connect using the 10Gb one.

In cases where multiple NICs on the proxy side can access the identified storage adapter, we cannot control which NIC will be used, as this is up to the Windows OS routing logic. I believe Windows automatically picks the interface with the lowest metric, and when metrics are equal, it uses the binding order from the Network Connections > Advanced > Advanced Settings dialog.
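
For anyone who wants to inspect those metrics, here is a small sketch (Windows-only; it shells out to the built-in netsh tool) that dumps the per-interface metrics the routing decision is based on:

import subprocess

# Windows-only: list IPv4 interfaces with their metrics ("Met" column).
# The lower metric wins when multiple interfaces can reach the same network.
out = subprocess.run(
    ["netsh", "interface", "ipv4", "show", "interfaces"],
    capture_output=True, text=True,
).stdout
print(out)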
tkeith
Enthusiast
Posts: 32
Liked: 17 times
Joined: Jan 09, 2015 4:49 pm
Full Name: Keith Thiessen

Re: NetApp Storage Integration Backup

Post by tkeith »

We had some concerns ourselves about how Veeam chooses the logical interfaces (LIFs) when mounting and accessing snapshots for direct-from-snapshot backups.

We ran a test with a VM that has 3 disks: one on node 7, one on node 1, and one on node 2 of our NetApp cluster.

The problem is that the backup selected a LIF whose home port is on node 4 when backing up all three of the above disks. This means that in every case the data traverses the cluster network from the source node to the node hosting the LIF before being streamed out by Veeam.

As you can imagine, this creates huge bottlenecks, destroys parallelism, and can heavily load both the cluster network and the node hosting the LIF, potentially causing operational problems for the cluster.

To avoid this problem, the selection of the LIF must be more intelligent: for each snapshot Veeam mounts, it should choose a LIF whose home port is located on the same controller node as the snapshot being mounted. That way we get full parallelism, straight out of the node into the backup infrastructure.
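
For illustration only, here is a rough sketch of the lookup this would take, written against ONTAP's REST API (available in 9.6 and later, so newer than the systems discussed in this thread); the cluster address, credentials, SVM, and volume names are all placeholders:

import requests

ONTAP = "https://cluster-mgmt.example.com"  # placeholder cluster management address
AUTH = ("admin", "secret")                  # placeholder credentials

def node_owning_volume(svm, volume):
    # The volume's aggregate tells us which node physically holds the data.
    vol = requests.get(f"{ONTAP}/api/storage/volumes",
                       params={"name": volume, "svm.name": svm,
                               "fields": "aggregates"},
                       auth=AUTH, verify=False).json()["records"][0]
    aggr = vol["aggregates"][0]["name"]
    rec = requests.get(f"{ONTAP}/api/storage/aggregates",
                       params={"name": aggr, "fields": "node"},
                       auth=AUTH, verify=False).json()["records"][0]
    return rec["node"]["name"]

def lifs_homed_on(svm, node):
    # Data LIFs whose home node matches the node that owns the volume.
    recs = requests.get(f"{ONTAP}/api/network/ip/interfaces",
                        params={"svm.name": svm,
                                "location.home_node.name": node,
                                "fields": "ip.address"},
                        auth=AUTH, verify=False).json()["records"]
    return [r["ip"]["address"] for r in recs]

node = node_owning_volume("svm1", "vol1")   # placeholder SVM and volume names
print(node, lifs_homed_on("svm1", node))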

Is there something we can configure to ensure this will happen? The backup architecture may need to be changed if this can’t be corrected…
tkeith
Enthusiast
Posts: 32
Liked: 17 times
Joined: Jan 09, 2015 4:49 pm
Full Name: Keith Thiessen

Re: NetApp Storage Integration Backup

Post by tkeith »

Veeam Support - Case # 00970819