Host-based backup of VMware vSphere VMs.

Slow replication within local cluster

Post by trackaddict799 »

New Veeam 9.5 setup on a NetApp FlexPod cluster running ESXi 6.0.
Datastores are NFS.
Enterprise license only, so not using the integrated NetApp storage goodies.
Simple "all-in-one" VBR server deployment for now.
Intending to replicate several SQL servers (~1 TB each) within the same ESXi cluster for a Virtual Lab.

My first test replication of a 60 GB test VM ran at a slow ~6 MB/s.
These larger 1 TB-plus VMs will take forever at that rate.
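Rough math on what that rate means for the bigger VMs, just arithmetic in a quick Python sketch, treating 1 TB as 1024 GB:

# Back-of-the-envelope transfer times at the observed ~6 MB/s rate.
def hours_at(size_gb: float, rate_mb_s: float) -> float:
    """Hours to move size_gb at rate_mb_s, treating 1 GB as 1024 MB."""
    return size_gb * 1024 / rate_mb_s / 3600

print(f"60 GB   @ 6 MB/s: {hours_at(60, 6):.1f} h")    # ~2.8 hours
print(f"1024 GB @ 6 MB/s: {hours_at(1024, 6):.1f} h")  # ~48.5 hours per 1 TB VM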

I'm trying different Storage Optimization settings in the advanced job config (Local, LAN, etc.), which affect block size, in the hope that it makes a difference.
Also looking at network traffic rules to force the job onto the NFS subnet, which is separate from the management subnet.
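As a quick sanity check of which local interface would actually be used to reach the NFS side, I'm using a tiny Python sketch. The address below is a placeholder for the NetApp NFS LIF, and this only asks the OS routing table on whatever machine runs it; it doesn't prove what Veeam's data mover will do once traffic rules are in place:

# Which local source IP would the OS pick to reach a given target address?
# A UDP "connect" sends no packets; it only consults the routing table.
import socket

def outbound_ip(target_ip: str, port: int = 2049) -> str:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect((target_ip, port))
        return s.getsockname()[0]

print(outbound_ip("192.168.50.10"))  # placeholder: NetApp NFS LIF address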

I believe it's using the Direct NFS method, but I'm not sure how to verify that.

Any help or pointers would be great.
Or specific tuning for NetApp NFS if that's the place to start.

Re: Slow replication within local cluster

Post by Vitaliy S. »

Hi,

What are the bottleneck stats for the job runs with low performance rates? And what backup mode is being used? You can check both in the job session in the backup console.

Thanks!

Re: Slow replication within local cluster

Post by trackaddict799 »

Stats below; most of the time they look similar, showing 99% Target. Once it was 99% Source, so either way it seems to be disk.
How do I confirm the mode used? I selected Direct Storage Access as the transport mode on the proxy.
Single proxy, since it's one local 6-host ESXi cluster.

Job started at 2/23/2017 5:58:59 PM
Building VMs list 0:00:01
VM size: 60.0 GB (35.8 GB used)
Changed block tracking is enabled
Processing CIN-VBRM-01 1:20:30
All VMs have been queued for processing
Load: Source 0% > Proxy 3% > Network 1% > Target 99%
Primary bottleneck: Target
Job finished at 2/23/2017 7:19:45 PM
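Side note on reading these numbers: as far as I understand it, the "Primary bottleneck" is simply the stage with the highest busy percentage in the Load line, e.g. in Python:

# Pick the bottleneck out of the "Load:" line -- it's just the busiest stage.
import re

load_line = "Load: Source 0% > Proxy 3% > Network 1% > Target 99%"
stages = {name: int(pct) for name, pct in re.findall(r"(\w+) (\d+)%", load_line)}
print(max(stages, key=stages.get))  # -> Target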

Re: Slow replication within local cluster

Post by Vitaliy S. »

This information can be taken from the corresponding job session statistics; look for the [san], [hot-add] or [nbd] tags. As for the bottleneck stats, they say the issue is in how data is written to the target storage; the source is OK!

What's the target storage? What performance do you get when uploading data to that storage via vSphere Client Datastore Browser?

Re: Slow replication within local cluster

Post by trackaddict799 »

Target and source storage are both NetApp FAS 3220 with 1.2 TB SAS disks.


Uploaded a 2 GB ISO file through the vSphere Datastore Browser in just over 2 minutes.
Quick math: 2000 MB / 120 s ≈ 16 MB/s.

So I can consider that my top end; obviously Veeam adds a few hops and some overhead.
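Same idea as a repeatable test in case I want to try other datastores later; a minimal Python sketch. Both paths are placeholders, and a straight file copy from my workstation is not exactly the same data path as a Datastore Browser upload, so treat the number as a rough ceiling only:

# Copy a test file and report the effective throughput in MB/s.
import os
import shutil
import time

src = "test.iso"                                 # placeholder: local test file
dst = r"\\some-nfs-mount\datastore01\test.iso"   # placeholder: path on the target storage

start = time.perf_counter()
shutil.copyfile(src, dst)
elapsed = time.perf_counter() - start

size_mb = os.path.getsize(src) / (1000 * 1000)   # decimal MB, matching the 2000 MB figure above
print(f"{size_mb:.0f} MB in {elapsed:.0f} s = {size_mb / elapsed:.1f} MB/s")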

A little confused about the proxy requirements for local-site replication too.
Do I need two, one for the source host and one for the target host, even if it's the same 6-host ESXi cluster?
Right now I just have the one proxy.

Re: Slow replication within local cluster

Post by trackaddict799 »

Saw in the job details that it was using "nbd" mode.
Changed the proxy from Direct to Automatic, re-ran the job, and it used "HotAdd" at a much better rate of 42 MB/s.

That tells me it was not able to use Direct NFS and fell back to NBD mode for those slow replications.
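For future reference, the same tag can also be pulled out of the job's log folder on the VBR server. A rough Python sketch; the default log location (C:\ProgramData\Veeam\Backup\<job name>) and the idea that Direct NFS shows up as an [nfs] tag are both assumptions on my part, so adjust as needed:

# Count transport-mode tags across a job's log files.
import re
from collections import Counter
from pathlib import Path

LOG_DIR = Path(r"C:\ProgramData\Veeam\Backup\MyReplicaJob")  # placeholder job folder
TAG = re.compile(r"\[(nbd|hotadd|san|nfs)\]", re.IGNORECASE)

hits = Counter()
for log_file in LOG_DIR.glob("*.log"):
    for line in log_file.read_text(errors="ignore").splitlines():
        for tag in TAG.findall(line):
            hits[tag.lower()] += 1

print(hits or "no transport tags found -- check the job session details instead")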

While HotAdd mode is much better at 42 MB/s, I'd like to compare it to Direct NFS if I can get that working.
Are there specific permissions I need to give the proxy from the NetApp side? I was thinking that if the ESXi cluster already has read/write/root access to these NFS datastores, that would be enough.
I read somewhere that the proxy VM itself needs permissions to the NFS exports on the NetApp; does that sound right?
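First thing I'll check while I read up on the requirements is plain reachability from the proxy VM to the NFS network; a minimal Python sketch. The address is a placeholder, and an open port obviously doesn't prove that the NetApp export policy actually grants the proxy access:

# Quick check from the proxy VM: can it reach the NFS interface at all?
# Only proves TCP reachability on the standard NFS port (2049).
import socket

NFS_SERVER = "192.168.50.10"  # placeholder: NetApp NFS LIF address

def nfs_port_open(host: str, port: int = 2049, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(f"{NFS_SERVER}:2049 reachable: {nfs_port_open(NFS_SERVER)}")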

Re: Slow replication within local cluster

Post by Vitaliy S. »

trackaddict799 wrote: Do I need two, one for the source host and one for the target host, even if it's the same 6-host ESXi cluster?
If replication is done over the LAN, then one proxy server is enough. The second one is needed when you're replicating over the WAN and need to process the "rebuild" traffic locally within the remote site.
trackaddict799 wrote: Saw in the job details that it was using "nbd" mode.
Changed the proxy from Direct to Automatic, re-ran the job, and it used "HotAdd" at a much better rate of 42 MB/s.

That tells me it was not able to use Direct NFS and fell back to NBD mode for those slow replications.
Yes. And if you untick the checkbox that allows failing over to NBD when direct storage (SAN/NFS) access is not available, the replication job will simply fail instead.
trackaddict799 wrote: While HotAdd mode is much better at 42 MB/s, I'd like to compare it to Direct NFS if I can get that working.
Are there specific permissions I need to give the proxy from the NetApp side? I was thinking that if the ESXi cluster already has read/write/root access to these NFS datastores, that would be enough.
I read somewhere that the proxy VM itself needs permissions to the NFS exports on the NetApp; does that sound right?
Yes, that sounds correct, but I would suggest re-checking all the requirements once again > Backup Proxy for Direct NFS Access Mode

Thanks!