-
- Expert
- Posts: 114
- Liked: 4 times
- Joined: Sep 02, 2010 2:23 pm
- Full Name: Steve B
- Location: Manchester, UK
- Contact:
Proxy>Proxy Network Traffic
I have three proxies, and when my backup job runs it uses all three to back up, no problem. When checking the network traffic on each proxy, it seems that the three proxies send their backup data to each other rather than directly to the NAS. So one proxy may be sending data to our NAS at 1Gb, but 30MB/sec of that data is coming from another proxy. It's as though the proxy is being used as a gateway to the CIFS share on the NAS. This is a per-VM backup job, so there should be no gateway. I would have thought each proxy would max out at 1Gb direct to the NAS; I've no idea why I'm seeing this random traffic between the proxies. Is this normal behaviour?
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Proxy>Proxy Network Traffic
Hello Steve,
What transport mode is used for your jobs?
If no traffic redirection is configured, backup data goes from the proxy directly to the chosen repository.
Do you also have machines holding several roles (backup server, proxy, repository)?
Thanks!
-
- Expert
- Posts: 114
- Liked: 4 times
- Joined: Sep 02, 2010 2:23 pm
- Full Name: Steve B
- Location: Manchester, UK
- Contact:
Re: Proxy>Proxy Network Traffic
All the proxies use hot-add (confirmed in the job log). The three proxies are dedicated. It's worth noting that it's not always the same proxy receiving data from another; it can be the other way around, and in fact in both directions. So, taking an example of just two proxies, they might be sending to each other AND both sending to the NAS at the same time.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Proxy>Proxy Network Traffic
The described routing behavior indeed looks strange.
Are all the proxies connected to the repository?
-
- Veeam Software
- Posts: 21170
- Liked: 2154 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Proxy>Proxy Network Traffic
There's still a chance of traffic going through another server even in the case of per-VM backup chains: when a VM has several disks, all of them are written to the repository through a single gateway.
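A tiny sketch of that behaviour, for anyone trying to picture it: each of a VM's disks may be processed by a different proxy, but all of the VM's data leaves through one gateway proxy. The proxy names and the round-robin disk assignment below are illustrative assumptions, not Veeam's actual selection logic.

```python
# Hypothetical model of a per-VM gateway: disks are spread across
# proxies, but every disk's data is funnelled through one gateway
# proxy on its way to the repository. Names and the round-robin
# assignment are assumptions for illustration only.
from collections import defaultdict

PROXIES = ["proxy1", "proxy2", "proxy3"]

def traffic_for_vm(num_disks, gb_per_disk, gateway):
    """Return (inter_proxy_gb, gateway_to_nas_gb) for one VM."""
    inter_proxy = defaultdict(float)
    to_nas = 0.0
    for disk in range(num_disks):
        proxy = PROXIES[disk % len(PROXIES)]  # round-robin assumption
        if proxy != gateway:
            # this disk's data hops proxy -> gateway before the NAS
            inter_proxy[(proxy, gateway)] += gb_per_disk
        to_nas += gb_per_disk  # everything exits via the gateway
    return dict(inter_proxy), to_nas

# A 12-disk VM, 4 disks landing on each proxy, proxy1 acting as gateway:
# 8 of the 12 disks cross the proxy network first, yet the full amount
# still leaves through proxy1's single uplink.
hops, nas_gb = traffic_for_vm(12, 10.0, "proxy1")
```

This is why two proxies can be seen sending to each other and to the NAS at the same time: each is gateway for some VMs and a mere data mover for others.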
-
- Expert
- Posts: 114
- Liked: 4 times
- Joined: Sep 02, 2010 2:23 pm
- Full Name: Steve B
- Location: Manchester, UK
- Contact:
Re: Proxy>Proxy Network Traffic
Ah OK, so if a VM has, say, 12 disks and they are split 4 per proxy, one proxy would act as the gateway for that particular VM's backup and all the data would go through it? Almost as though the backup job is dedicating one proxy as a gateway for each VM.
I can understand this approach when using compression: you're leveraging the CPU power of the other proxies, and the network probably isn't the bottleneck. In my case I have no compression enabled, so all I really want is direct proxy-to-repository transfers. I don't know if it's possible to get around this. FYI, the target is a SOBR with three repos on three different IPs that actually all sit on the same NAS; I'm trying to get 3Gb throughput to it (NIC bonding wasn't reliable enough).
-
- Veeam Software
- Posts: 21170
- Liked: 2154 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Proxy>Proxy Network Traffic
stevil wrote: Ah OK, so if a VM has, say, 12 disks and they are split 4 per proxy, one proxy would act as the gateway for that particular VM's backup and all the data would go through it? Almost as though the backup job is dedicating one proxy as a gateway for each VM.
Correct.
You can split your jobs and specify a single proxy explicitly for each of them to avoid unnecessary traffic. Btw, why do you disable compression?
-
- Expert
- Posts: 114
- Liked: 4 times
- Joined: Sep 02, 2010 2:23 pm
- Full Name: Steve B
- Location: Manchester, UK
- Contact:
Re: Proxy>Proxy Network Traffic
OK, that won't really work for the way we have everything configured. No compression, as it's going to a Dell DR4300, which has dedupe/compression on it already. Dell do have a source-side dedupe driver that gives around 7Gbps transfer for our jobs over a 1Gb link, which is awesome, but it doesn't work with Veeam properly and does occasionally lock up proxies etc. Hence looking at trying to increase throughput another way.
-
- Veeam Software
- Posts: 21170
- Liked: 2154 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Proxy>Proxy Network Traffic
Makes sense if it's a dedupe appliance, indeed. What is your current bottleneck, btw?
-
- Expert
- Posts: 114
- Liked: 4 times
- Joined: Sep 02, 2010 2:23 pm
- Full Name: Steve B
- Location: Manchester, UK
- Contact:
Re: Proxy>Proxy Network Traffic
I have 3 x 1Gb NICs in the DR4300, and I'm getting about 2.3Gbps at best. This is due to the inter-proxy transfers hogging part of the bandwidth. I considered moving the VMs to the same host so they would go through the vSwitch at 10Gb to each other, but then the host only has one NIC facing the NAS on this VLAN! So I have to keep the VM proxies separate to get the maximum bandwidth, using three physical ESX hosts.
The bottleneck is obviously our 1Gb network at this particular site. 10Gb is a no-go, still too expensive for this site.
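For anyone following along, a rough back-of-envelope using the figures quoted in this thread. All numbers are illustrative, and the assumption that the whole shortfall is inter-proxy relay overhead is mine, not something measured.

```python
# Unit arithmetic with the figures from this thread. The assumption that
# the shortfall equals inter-proxy relay overhead is illustrative only.

links = 3
link_gbps = 1.0
observed_gbps = 2.3

relay_mb_s = 30                     # inter-proxy stream seen on one proxy (MB/s)
relay_gbps = relay_mb_s * 8 / 1000  # 30 MB/s is roughly 0.24 Gb of a 1 Gb uplink

ceiling_gbps = links * link_gbps               # 3.0 Gb/s if every link went straight to the NAS
shortfall_gbps = ceiling_gbps - observed_gbps  # roughly 0.7 Gb/s unaccounted for

print(relay_gbps, ceiling_gbps, round(shortfall_gbps, 1))
```

In other words, each 30MB/sec relay stream ties up about a quarter of a 1Gb uplink, which is in the right ballpark to explain the gap between the 3Gb theoretical ceiling and the observed 2.3Gbps.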