-
- Influencer
- Posts: 12
- Liked: never
- Joined: Jun 18, 2020 12:23 am
- Full Name: SC
- Contact:
WAN Accelerator resource utilization
We're trying to copy ~7 TB of data between two sites for retention, and we deployed a pair of WAN accelerators hoping to speed things up a bit. Both the B&R server and the WAN accelerators (WAs) are on individual VMs. Both the source and target WAs have 4 CPUs and a 500 GB SSD.
My question is: the source WA shows only about 20% CPU utilization with occasional bursts to 90%. Is this normal behavior, or is there a way to drive utilization higher?
The bottleneck breakdown also shows something strange, with the target WAN constantly at 0%:
Source: 5%
Source WAN: 92%
Network: 98%
Target WAN: 0%
Target: 81%
Any suggestions on how to optimize this would be much appreciated.
Stephen
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: WAN Accelerator resource utilization
Hi Stephen, what kind of link is involved and what bandwidth mode is configured on both WAN accelerators?
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Jun 18, 2020 12:23 am
- Full Name: SC
- Contact:
Re: WAN Accelerator resource utilization
It's a 10 Gbps VPN tunnel, but our network admin applied traffic shaping to limit backup traffic to a percentage of that.
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: WAN Accelerator resource utilization
Hello,
By the way, what is the processing rate shown in the job statistics, and how far is it from what you're trying to achieve?
The network is the bottleneck (98%) according to the statistics provided, so it makes sense to test bandwidth with iPerf and compare the test results with the job processing rate.
Low-bandwidth mode is fine for links slower than 100 Mbps, and high-bandwidth mode is recommended for WAN connections faster than 100 Mbps.
Please note that direct data transfer without WAN accelerators will still be preferable for connections faster than 1 Gbps.
Thanks!
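A rough sketch of the comparison suggested above, for anyone following along: measure the raw link throughput with iPerf, then see how close the job's processing rate gets to it and which WAN accelerator mode the guidance points to. The iperf_mbps value is a placeholder assumption, not a figure from this thread.

[code]
# Sketch only: compare an assumed iPerf measurement against the job processing rate.
iperf_mbps = 300.0            # placeholder; replace with your own iPerf result
job_rate_mb_per_s = 38.0      # processing rate reported later in this thread
job_mbps = job_rate_mb_per_s * 8

print(f"Link (iPerf): {iperf_mbps:.0f} Mbps, job: {job_mbps:.0f} Mbps "
      f"({job_mbps / iperf_mbps:.0%} of measured bandwidth)")

# Guidance from the post above:
#   < 100 Mbps  -> low-bandwidth mode on the WAN accelerators
#   > 100 Mbps  -> high-bandwidth mode
#   > 1 Gbps    -> direct transfer without WAN accelerators is usually preferable
if iperf_mbps < 100:
    print("Low-bandwidth mode is the better fit for this link.")
elif iperf_mbps <= 1000:
    print("High-bandwidth mode is the recommended setting.")
else:
    print("Direct transfer without WAN accelerators is likely faster.")
[/code]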
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Jun 18, 2020 12:23 am
- Full Name: SC
- Contact:
Re: WAN Accelerator resource utilization
Processing rate is 38 MB/s, and we are running high-bandwidth mode on the accelerators.
I don't have a target number in mind; I'm just curious whether we're gaining anything with the accelerators. I was hoping the WAN accelerators would reduce the transfer size more than they currently do.
Thanks for the responses.
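For rough context, a back-of-the-envelope sketch using only the figures posted in this thread (38 MB/s processing rate, ~7 TB to copy):

[code]
# Back-of-the-envelope numbers from the figures in this thread.
rate_mb_per_s = 38.0
data_tb = 7.0

rate_mbps = rate_mb_per_s * 8                        # ~304 Mbps effective throughput
total_seconds = (data_tb * 1_000_000) / rate_mb_per_s
print(f"{rate_mbps:.0f} Mbps effective throughput")
print(f"A full copy of ~{data_tb:.0f} TB takes about {total_seconds / 3600:.0f} hours")
[/code]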
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: WAN Accelerator resource utilization
I'd check with your network admin regarding the exact allocated bandwidth and maybe try low-bandwidth mode based on that.