Host-based backup of VMware vSphere VMs.
lowlander
Service Provider
Posts: 450
Liked: 30 times
Joined: Dec 28, 2014 11:48 am
Location: The Netherlands
Contact:

NIC teaming / backup throughput / restore throughput

Post by lowlander »

Hi,

we set up a proxy/repository server based on Windows 2016. We use 8 network adapters in teaming mode (dynamic load balancing / LACP). On the switch side we configured an aggregate, and LACP is in place there as well.

Backups run perfectly at 800 MBps throughput; the incoming streams from the hypervisor layer (VMware) to the repository fill the available bandwidth.

However, when we restore a full virtual machine to the hypervisor layer, we only see 1 Gbps being used instead of all eight network adapters (8 Gbps). Only one veeamagent.exe session is active, sending to the hypervisor (ESXi host). Given that, I believe this is why we are not achieving higher bandwidth utilization.

Is it expected that a restore over a Windows network team flows through a single network adapter? Can we improve restore speed?

Thanks !
Andreas Neufert
VP, Product Management
Posts: 6707
Liked: 1401 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: NIC teaming / backup throughput / restore throughput

Post by Andreas Neufert »

Can you please share which restore mode you use? NBD? HotAdd?
lowlander
Service Provider
Posts: 450
Liked: 30 times
Joined: Dec 28, 2014 11:48 am
Location: The Netherlands
Contact:

Re: NIC teaming / backup throughput / restore throughput

Post by lowlander »

Hi,

The proxy is configured as automatic, but we tried NBD, SAN and HotAdd. The data has to be transported from the proxy/repository server to the hypervisor (VMware host), so it looks like the traffic will always go over the LAN.
lowlander
Service Provider
Posts: 450
Liked: 30 times
Joined: Dec 28, 2014 11:48 am
Location: The Netherlands
Contact:

Re: NIC teaming / backup throughput / restore throughput

Post by lowlander »

as an addendum ;)

I like having four physical servers dedicated as proxy and repository. When using HotAdd we would need to deploy a virtual machine as a proxy server, and I understand the benefit of a virtual proxy: the transport services between the repository and the virtual machine can set up multiple connections, which would optimize throughput on the 800 MBps link. Maybe we should consider using 10 Gbps instead of 8x 1 Gbps ;) That would save us from needing a virtual "restore" proxy.

thanks !
Andreas Neufert
VP, Product Management
Posts: 6707
Liked: 1401 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: NIC teaming / backup throughput / restore throughput

Post by Andreas Neufert »

Thanks for the additional explanations.

As the proxy server is physical, you always use NBD mode for backup and restore. This transport mode is somewhat throughput-restricted on the VMware side (the management interface reserves capacity for VMware-internal tasks such as DRS traffic and vMotion).

For backup you run multiple backup jobs in parallel, and that works well with the teaming. If I understand your teaming method correctly, a single connection (as at VM restore) will run through only one of your NICs. So a 10 GbE connection would help, but the throughput would still be limited somewhat by the NBD mode itself.

In your situation it would be helpful to set up a virtual proxy. The repository would send compressed data to the proxy (this alone could roughly double the speed), and you would write without the restrictions of the VMware management interface in NBD mode. HotAdd also uses asynchronous writes, which helps the storage system work more optimally.
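As a back-of-the-envelope sketch of the compression point above (the 2:1 ratio and wire speed are assumed numbers, not measurements): if the repository-to-proxy link carries compressed data that the virtual proxy decompresses locally, the effective restore rate roughly doubles for the same wire speed.

```python
# Hypothetical numbers only: effect of shipping compressed data to a
# virtual HotAdd proxy instead of raw data over NBD.
wire_gbit = 1.0          # one team member NIC carries the single session
compression_ratio = 2.0  # assumed 2:1 backup compression

raw_restore_gbit = wire_gbit                             # uncompressed over the wire
compressed_restore_gbit = wire_gbit * compression_ratio  # decompressed at the proxy

print(raw_restore_gbit, compressed_restore_gbit)
```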
Regnor
Veeam Software
Posts: 934
Liked: 287 times
Joined: Jan 31, 2011 11:17 am
Full Name: Max
Contact:

Re: NIC teaming / backup throughput / restore throughput

Post by Regnor »

A single network connection will always use only one interface. Teaming can distribute different network sessions across interfaces, so in effect you have 8 x 1 Gbps interfaces rather than one 8 Gbps interface.
A restore operation creates a single network session between the repository and the proxy. Veeam would have to use multiple threads/agents in order to utilize more than one NIC, and even then it would depend on the load-balancing algorithm.
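The behavior described above can be sketched with a toy flow-hashing model (illustrative Python only, not Windows or Veeam code; the hash fields and team size are assumptions): every packet of a given session hashes to the same team member, so a single restore stream is pinned to one 1 Gbps NIC, while many parallel backup streams spread across the team.

```python
# Toy model of hash-based NIC teaming: one flow -> one member NIC.
import hashlib

NUM_NICS = 8  # team size assumed from the thread (8 x 1 Gbps)

def pick_nic(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Pick a team member by hashing the flow's address/port fields (simplified)."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_NICS

# A single restore session always lands on the same NIC:
restore_nic = pick_nic("10.0.0.5", "10.0.0.9", 50321, 902)

# Many parallel backup streams (different source ports) spread across the team:
backup_nics = {pick_nic("10.0.0.5", "10.0.0.9", 50000 + i, 902) for i in range(64)}
print(restore_nic, sorted(backup_nics))
```

This is why adding NICs to the team helps aggregate throughput across many sessions but never accelerates one session beyond a single member's line rate.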