-
- Influencer
- Posts: 24
- Liked: 2 times
- Joined: Feb 18, 2020 5:45 pm
- Full Name: Kevin Chubb
- Contact:
Backup job performance
I have a Windows VM with ~4.5 TB used space. I cancelled the initial backup job because it was taking so long, so I'm looking for a way to increase performance.
Standalone physical Windows Server 2016 Veeam B&R server hosting the backup proxy and backup repository
Veeam B&R 10A
VMware vCenter Server Appliance 6.7, ESXi 6.7
Cisco Nexus 9K switches
Cisco UCS B200 M5 hosts
NetApp AFF A200 storage nodes w/ iSCSI datastores
Before making any changes I first ran a VeeamZIP job on a different VM as a performance reference point.
I then tried creating a NIC team on the Veeam server. I ran two Ethernet cables from the Veeam server NICs to a switch that has two ports configured in a port channel. The NIC team's teaming mode is Static Teaming and its load balancing mode is Dynamic.
I ran another VeeamZIP job on the same reference VM and there's almost no performance increase. The load was balanced across NICs though.
Two questions...
Is there more that I need to know about NIC teams and possibly the way that Veeam will utilize them?
Am I going down the wrong path completely, and should I be looking into something other than NIC teaming? We do not have Veeam licensing for array-based backups.
-
- Veeam Software
- Posts: 3624
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: Backup job performance
Hi Kevin,
Basically, NIC teaming is transparent to Veeam, though it can accelerate data processing in some cases. However, I'm wondering how you came to the conclusion that NIC teaming would help increase performance in your particular case?
What is the actual processing rate, and what rate are you trying to reach? Where is the "bottleneck" according to the job statistics, and which transport mode are you using?
Thanks!
-
- Influencer
- Posts: 24
- Liked: 2 times
- Joined: Feb 18, 2020 5:45 pm
- Full Name: Kevin Chubb
- Contact:
Re: Backup job performance
I tried a NIC team because the Veeam server has four NICs and only had one connected. Basically it was easy to build a NIC team and see what impact it had.
The processing rate was 96 MB/s without the NIC team and 98 MB/s with it. I don't have a specific rate that I'm trying to reach but 200-400 MB/s seems reasonable. The bottleneck is "Source" and transport mode is "Automatic selection" (which selects "Network" since we do not have "Direct storage access" licensing and there is no "Virtual appliance" proxy VM).
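For context, a quick back-of-the-envelope calculation (illustrative arithmetic, not from the thread) shows what these rates mean for the ~4.5 TB backup window:

```python
# Rough backup-window estimate for the ~4.5 TB VM in this thread.
# Rates are illustrative; real jobs vary with compression, dedup,
# and change rate, so treat these as order-of-magnitude figures.

def backup_hours(data_tb: float, rate_mb_s: float) -> float:
    """Hours to move data_tb terabytes at rate_mb_s megabytes/second."""
    data_mb = data_tb * 1024 * 1024  # TB -> MB (binary units)
    return data_mb / rate_mb_s / 3600

for rate in (96, 200, 400):
    print(f"{rate} MB/s -> {backup_hours(4.5, rate):.1f} h")
# 96 MB/s -> 13.7 h, 200 MB/s -> 6.6 h, 400 MB/s -> 3.3 h
```

So at the observed ~96 MB/s a full pass takes over half a day, while reaching the 200-400 MB/s target would bring it into a normal overnight window.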
-
- Product Manager
- Posts: 2579
- Liked: 708 times
- Joined: Jun 14, 2013 9:30 am
- Full Name: Egor Yakovlev
- Location: Prague, Czech Republic
- Contact:
Re: Backup job performance
Hi Kevin.
- Direct SAN Access is a transport mode that does not require a special license type from our side - you can still use Direct SAN Access to NetApp datastores without our Storage Snapshots integration. Feel free to test it.
- It is also worth giving a Virtual Appliance (VA) proxy a shot. You don't have to dedicate an additional server to it and can use an existing VM as the proxy. Sometimes the few clicks it takes to add a VA proxy yield a several-times-faster backup.
- Please check the load on your ESXi management interface during backup. I have a feeling it might be the bottleneck.
/Thanks!
-
- Influencer
- Posts: 24
- Liked: 2 times
- Joined: Feb 18, 2020 5:45 pm
- Full Name: Kevin Chubb
- Contact:
Re: Backup job performance
All things being equal, would Direct SAN Access typically give a better performance increase than a proxy VM?
-
- Influencer
- Posts: 24
- Liked: 2 times
- Joined: Feb 18, 2020 5:45 pm
- Full Name: Kevin Chubb
- Contact:
Re: Backup job performance
Also, if I use Direct SAN Access or a proxy VM, would that bypass the ESXi management interface?
-
- Product Manager
- Posts: 2579
- Liked: 708 times
- Joined: Jun 14, 2013 9:30 am
- Full Name: Egor Yakovlev
- Location: Prague, Czech Republic
- Contact:
Re: Backup job performance
- Direct SAN Access is typically the fastest of them all.
- And yes, Direct SAN Access transport mode reads blocks directly from the storage.
/Cheers!
-
- VeeaMVP
- Posts: 1007
- Liked: 314 times
- Joined: Jan 31, 2011 11:17 am
- Full Name: Max
- Contact:
Re: Backup job performance
For a single network connection (backup of one VM) you won't see much, if any, performance improvement from a NIC team; a single flow will always use only one NIC.
In general, if you back up multiple VMs from multiple hosts, the load can be balanced over multiple NICs.
If you have multiple virtual proxies you can benefit further, depending on how your switches do load balancing.
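The single-connection limitation can be sketched with a toy flow-hash link selector (hypothetical logic, not the actual Windows LBFO or switch algorithm - the IPs and port are made up): the team hashes each flow's addresses and port to pick a NIC, so one flow always lands on the same link, while many flows can spread out.

```python
# Toy flow-hash link selection: one flow always maps to the same NIC,
# so a single backup stream cannot exceed one link's bandwidth.
# (Illustrative only -- not Windows LBFO's or any switch's real algorithm.)

def pick_nic(src_ip: str, dst_ip: str, dst_port: int, nic_count: int) -> int:
    """Deterministically map a flow's 3-tuple onto one of nic_count NICs."""
    return hash((src_ip, dst_ip, dst_port)) % nic_count

# A single VM backup is one flow: every packet hashes to the same NIC.
flow = ("10.0.0.5", "10.0.0.9", 2500)
assert len({pick_nic(*flow, nic_count=2) for _ in range(1000)}) == 1

# Many VMs mean many distinct flows, which can spread across both NICs.
nics = {pick_nic("10.0.0.5", f"10.0.1.{i}", 2500, 2) for i in range(1000)}
print(sorted(nics))
```

This is why the team balanced load in Kevin's test yet barely changed the job's rate: the measured job was a single flow.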
So in your case, Direct SAN Access mode or a virtual proxy with hot-add can improve backup performance.
If you see the storage being the bottleneck, splitting your VMDK into smaller disks could also help, as Veeam can read multiple disks in parallel.