Veeam 9 Backup Speeds on 10GB Network

by paps40 » Tue Mar 29, 2016 5:14 pm

I have been stuck on this issue for a long time and wanted to reach out to the Veeam Forums for assistance.

My question is the following:
Is there anything hard-coded in Veeam 9 that would throttle Direct SAN backup speeds to under 1.5 Gbps? I cannot get Direct SAN backups to transfer faster than 1.5 Gbps. Over the past 2 years we have slowly upgraded our network to 10 GbE, but my backup speeds have not increased with the new hardware. Why is that? Jumbo frames are enabled end to end. I have opened tickets with EMC, Dell, Intel, Cisco, and Microsoft, and followed all their recommendations, but still can't get the speeds to increase. They have all suggested asking Veeam to take another look. I had a ticket open with Veeam during the v9 upgrade but closed it once the upgrade was completed; Veeam support said everything looked good on the Veeam end.

Hardware
1. Physical Veeam Server
Dell R720 with Intel Xeon E5-2670, 48 GB RAM, 32 TB NL-SAS storage (firmware updated Jan 2016)
10 GbE NIC - Intel® Ethernet Converged Network Adapter X520-DA2 (firmware updated March 2016). Direct SAN runs on the 10 GbE NICs.

2. Switches
Cisco Nexus 9K switches (firmware one rev behind)
MTU = 9216

3. EMC VNX2 SAN - 10 GbE (running latest firmware)
Read-ahead buffer: left at the Veeam default of 4 MB (EMC said to leave it at defaults)

4. Veeam Settings
4a. Per VM Backups Enabled
4b. Block Size = LAN Target (512 KB Blocks)
4c. Multiple Upload Streams Per Job = 5
4d. Max Concurrent Tasks = 16
4e. Compression = Dedupe Friendly (we are using Server 2012 deduplication to dedupe older backups that are saved to a different volume we call the Archive Volume; Veeam backs up to a Landing Zone that is not deduped)
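
(Side note: these job options can also be read from the Veeam 9 PowerShell snap-in. A minimal sketch, assuming a hypothetical job named "Prod Backup"; the property paths are from memory of the v9 snap-in, so verify them on your install.)

Add-PSSnapin VeeamPSSnapin
$job  = Get-VBRJob -Name "Prod Backup"       # hypothetical job name
$opts = $job.GetOptions()
$opts.BackupStorageOptions.StgBlockSize      # expect KbBlockSize512 for LAN target
$opts.BackupStorageOptions.CompressionLevel  # dedupe-friendly compression level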

5. Intel 10 GbE NIC Tweaks
5a. netsh interface tcp set global autotuninglevel=disabled (recommended at a Veeam User Group meeting)
5b. Disabled Large Send Offload V2 for IPv6
5c. Intel NIC properties - changed Max Number of RSS Queues from 8 to 16, so the queues now match the RSS processors
5d. TCP/IP Offloading Options - turned off IPv6 offloads
5e. Turned off anything else IPv6-related
5f. Forced NIC speed to 10 Gbps
5g. Enabled jumbo frames (9014 bytes)
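
(For anyone who would rather script these tweaks than click through the adapter properties, a rough PowerShell equivalent is sketched below. "Ethernet" is a placeholder adapter name, and the DisplayName/DisplayValue strings vary by driver, so check Get-NetAdapterAdvancedProperty on your box first.)

# 5a. Disable TCP auto-tuning
netsh interface tcp set global autotuninglevel=disabled
# 5b. Disable Large Send Offload v2 for IPv6
Disable-NetAdapterLso -Name "Ethernet" -IPv6
# 5c. Match the RSS queue count to the RSS processors
Set-NetAdapterRss -Name "Ethernet" -NumberOfReceiveQueues 16
# 5d/5e. Unbind IPv6 from the adapter
Disable-NetAdapterBinding -Name "Ethernet" -ComponentID ms_tcpip6
# 5f/5g. Force speed and enable jumbo frames (driver-specific value strings)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Speed & Duplex" -DisplayValue "10 Gbps Full Duplex"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"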

Re: Veeam 9 Backup Speeds on 10GB Network

by Gostev » Tue Mar 29, 2016 5:23 pm

What are the bottleneck statistics in the Veeam job?

Re: Veeam 9 Backup Speeds on 10GB Network

by paps40 » Tue Mar 29, 2016 5:28 pm

Bottleneck = Source
Source - 99%
Proxy - 37%
Network - 0%
Target - 0%

EMC Tech Support
After reviewing your SP collects, the VNX2 is set to handle jumbo frames. The physical ports connected to the VNX are set to 9000 MTU. Based on your configuration we see no cause for the low bandwidth. As noted in the previous email, you are able to see the physical settings, and they are set to best practice for the I/O speeds you are looking to get.
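
(A quick end-to-end jumbo-frame check from the Veeam server, if it hasn't been done already: a don't-fragment ping with an 8972-byte payload - 9000 MTU minus 28 bytes of IP/ICMP headers - has to get through unfragmented. The address below is a placeholder for a VNX front-end port.)

ping -f -l 8972 10.0.0.50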

Re: Veeam 9 Backup Speeds on 10GB Network

by Gostev » Tue Mar 29, 2016 6:16 pm

What total job throughput do you get doing a full backup of a single VM vs. 2 VMs concurrently?

Re: Veeam 9 Backup Speeds on 10GB Network

by paps40 » Tue Mar 29, 2016 7:26 pm

Active Full Backups From Last Weekend

1 VM - Server 2012 / SQL 2012 = 131 MB/s
13 VMs - Server 2012 = 152 MB/s
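
(For scale, a quick sanity check on those numbers: 10 GbE line rate is roughly 1,250 MB/s, while the ~1.5 Gbps ceiling described above works out to 1.5 Gbps ÷ 8 ≈ 187 MB/s. The observed 131-152 MB/s is about 1.0-1.2 Gbps on the wire - right at that ceiling, and an order of magnitude below what the hardware should allow.)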

Re: Veeam 9 Backup Speeds on 10GB Network

by Gostev » Tue Mar 29, 2016 8:57 pm

Yeah, something is clearly not in order here. From the bottleneck stats it indeed does not look like Veeam is the issue: our data mover spends all its time just waiting for the storage to provide the requested data blocks, while the rest of the processing chain is completely idle.

paps40 wrote: Veeam support mentioned everything looked good on the Veeam end.

Did they perform some basic I/O performance tests using other tools?
Was the speed in these tests roughly the same as during backup?
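
(If not, a raw sequential-read baseline against the SAN-presented volume, taken outside of Veeam, would isolate the storage path. A sketch using Microsoft's diskspd; the file path and sizes are illustrative.)

# 60-second sequential read, 512 KB blocks to match the LAN-target block size,
# 4 threads x 8 outstanding I/Os, caching disabled (-Sh), 0% writes (-w0);
# -c10G creates a hypothetical 10 GiB test file on the VNX-backed volume
diskspd.exe -b512K -d60 -o8 -t4 -Sh -w0 -c10G E:\iotest.dat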

