Comprehensive data protection for all workloads
cerberus
Expert
Posts: 164
Liked: 17 times
Joined: Aug 28, 2015 2:45 pm
Full Name: Mirza
Contact:

Using WAN accelerators to replicate VMs, taking very long to replicate large VMs.

Post by cerberus »

Hello,

Support case #02168869

Using VBR10 to replicate a bunch of VMware VMs from site A to site B. We have 3 replication jobs setup (many-to-one WAN accelerator design).

Two of the jobs have some very large VMs (3-4 VMDK disks, 2TB each); this is our Exchange server.

The issue I am running into is that the daily rate of change for all VMs replicates pretty fast, except for the large Exchange VMs (the last VMs in the job). The job takes 15 hours to run, with the Exchange VM alone taking 13 hours.

Both source and destination WAN accelerator and Proxy VMs are on SSDs (including the global cache, digest cache, etc.).

The replication jobs are setup to replicate from backup, the backup VBKs are on SSDs.
The backup server holding the VBKs, the source WAN accelerator, and the proxy VM are all connected over a 10 GbE LAN.
The replica VMs in site B are on a storage array LUN that is barely being utilized; perf stats: https://ibb.co/f2RPCh7.
There is no throttling in place anywhere.
The backup server, source/destination WAN accelerator, and proxy VMs have AV disabled.
Destination WAN accelerator has 8 vCPU and 64GB RAM, and the destination Proxy VM has 4 vCPUs and 16GB RAM.
Pipe between Site A and Site B is 50Mbps MPLS (80ms).
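For context, here is a quick back-of-envelope on the raw pipe (the 50 Mbps figure is from above; the ~300 GB of data crossing the link is a hypothetical illustration, and WAN accelerator dedupe/compression savings are ignored):

```python
# Raw throughput estimate for the 50 Mbps MPLS link.
link_mbps = 50                          # link speed, megabits per second
mb_per_sec = link_mbps / 8              # ~6.25 MB/s raw throughput
gb_per_hour = mb_per_sec * 3600 / 1024  # ~22 GB transferred per hour

# Hypothetical: if ~300 GB of post-reduction data must cross the link,
# the transfer alone accounts for roughly the observed job duration.
hours_needed = 300 / gb_per_hour        # ~13.7 hours
print(round(gb_per_hour, 1), round(hours_needed, 1))
```

In other words, if the accelerators aren't reducing the Exchange VM's change rate well (Exchange data tends to dedupe poorly), a 13-hour run may simply be the link saturating.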

Network usage snapshot: [screenshot]

Destination Proxy perf: [screenshot]

Destination WAN accelerator perf: [screenshot]

Veeam Job stats: [two screenshots]

I am at a loss as to what the bottleneck is and why it takes considerably longer to replicate these large VMs.

What can I do to make this go faster?
PetrM
Veeam Software
Posts: 3626
Liked: 608 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Using WAN accelerators to replicate VMs, taking very long to replicate large VMs.

Post by PetrM »

Hello,

According to the job statistics, the bottleneck is the WAN accelerator on the target side. The next step is to determine the exact operation that slows down the whole process, for example by analyzing debug logs of the target WA service or by collecting advanced performance statistics from the target WAN accelerator. In any case, I'm sure our support team will be able to figure out the best action plan for the research.

Also, keep in mind that low bandwidth mode is recommended for links slower than 100 Mbps; you may want to check the corresponding setting.

Thanks!
cerberus
Expert
Posts: 164
Liked: 17 times
Joined: Aug 28, 2015 2:45 pm
Full Name: Mirza
Contact:

Re: Using WAN accelerators to replicate VMs, taking very long to replicate large VMs.

Post by cerberus »

We are already running in low bandwidth mode to make use of the global cache, which helps with the daily rate of change.
PetrM
Veeam Software
Posts: 3626
Liked: 608 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Using WAN accelerators to replicate VMs, taking very long to replicate large VMs.

Post by PetrM »

Ok, then let's wait for the conclusion from our support team.

Thanks!
cerberus
Expert
Posts: 164
Liked: 17 times
Joined: Aug 28, 2015 2:45 pm
Full Name: Mirza
Contact:

Re: Using WAN accelerators to replicate VMs, taking very long to replicate large VMs.

Post by cerberus »

Hi PetrM,

One suggestion was to reduce the global cache size of the target WAN accelerator, as it may be oversized for the amount of data being replicated. An oversized cache may be bottlenecking the process.

What will happen to the existing cache data inside the blob.bin file when the cache size is reduced via the UI? I know we can increase the WAN accelerator cache ad hoc, but what happens internally if we try to decrease it?
PetrM
Veeam Software
Posts: 3626
Liked: 608 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Using WAN accelerators to replicate VMs, taking very long to replicate large VMs.

Post by PetrM »

Hi Mirza,

Basically, the internal structure of blob.bin should be transparent to you; the main point is that re-population of the global cache won't be required.

Thanks!