10G and nbd Backup

by ureitz » Thu Dec 07, 2017 10:16 pm

Hi,

So far we had two ESXi (5.1) hosts with several 1 Gbit NICs. Storage for the VMs is a SAN (Fibre Channel). The Veeam server was a VM, and the backup target is a NAS.
There was / is one vSwitch configured.

Now we have equipped both hosts with a 10G NIC. Also new is a physical backup server with 10G NICs, local storage for backups, and a SAS tape library.

We added the 10G NICs to the management network in ESXi, but backup job performance is "only" 100 MB/s. The network flow from ESXi to the backup server is still 1G.

Do you have any advice on how to configure the ESXi hosts to use 10G instead of 1G? Do the VMs also need a 10G NIC inside (VMXNET3)?

Many thanks and greetings
ureitz
Novice
 
Posts: 5
Liked: never
Joined: Tue May 29, 2012 3:45 pm
Full Name: Ulf Reitz

Re: 10G and nbd Backup

by lando_uk » Fri Dec 08, 2017 10:33 am

You'll find you won't get more than 1G per VM, but if you run parallel jobs the total will go over 1G. Try backing up multiple VMs at the same time and you'll see it go faster, though it won't be anywhere near 10G. Typically we get 300-500 MB/s during the nightly backups; you need to look at the ESXi network charts and not the Veeam jobs themselves.
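
As a rough back-of-the-envelope model of that behaviour (the per-stream ceiling and the usable share of the 10G link are illustrative assumptions here, not measured values), in Python:

    # Each NBD stream tops out around ~100 MB/s in practice, while the
    # 10G link itself only yields a fraction of line rate for NBD.
    PER_STREAM = 100          # assumed per-VM ceiling, MB/s
    LINK_CAP = 1250 * 0.35    # 10 Gb/s ~ 1250 MB/s, ~35% assumed usable

    for n in (1, 2, 4, 6, 8):
        # Aggregate grows with parallel VMs until the link share caps it.
        print("%d parallel VMs -> ~%d MB/s" % (n, min(n * PER_STREAM, LINK_CAP)))

With numbers like these the aggregate flattens out around 400-450 MB/s, which is in the same ballpark as what we see at night.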
lando_uk
Expert
 
Posts: 254
Liked: 18 times
Joined: Thu Oct 17, 2013 10:02 am
Location: UK
Full Name: Mark

Re: 10G and nbd Backup

by foggy » Fri Dec 08, 2017 12:07 pm

Please check which transport mode is being used during backup in the job session log, and make sure Direct SAN access is configured.
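
For reference, the session statistics show the selected mode in square brackets next to the proxy; expect a line similar to this one (the proxy and disk names here are illustrative):

    Using backup proxy VMware Backup Proxy for disk Hard disk 1 [nbd]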
foggy
Veeam Software
 
Posts: 15394
Liked: 1142 times
Joined: Mon Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: 10G and nbd Backup

by Didi7 » Sat Dec 09, 2017 11:58 pm · 1 person likes this post

Hello ureitz,

First of all, forget about the NBD transport mode: its traffic is limited to roughly 30-40% of the link by VMware, as VMware reserves resources on vSwitches whose VMkernel ports are configured for management traffic.
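
If you want to verify which physical uplinks (and at what negotiated link speed) actually back the vSwitch that carries your management VMkernel port, a small pyVmomi sketch along these lines can inventory it. The hostname and credentials are placeholders, and it assumes standard vSwitches rather than a distributed switch:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details - replace with your own.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True).view
        for host in hosts:
            net = host.config.network
            # Physical NIC -> negotiated link speed in Mb/s
            speed = {p.device: (p.linkSpeed.speedMb if p.linkSpeed else 0)
                     for p in net.pnic}
            # Portgroup name -> owning standard vSwitch
            pg2vs = {pg.spec.name: pg.spec.vswitchName
                     for pg in net.portgroup}
            print(host.name)
            for vs in net.vswitch:
                # vs.pnic entries look like 'key-vim.host.PhysicalNic-vmnic2'
                uplinks = [k.rsplit("-", 1)[-1] for k in vs.pnic]
                print("  %s uplinks: %s" % (vs.name,
                      ", ".join("%s (%s Mb)" % (u, speed.get(u, "?"))
                                for u in uplinks)))
            for vnic in net.vnic:  # VMkernel ports: vmk0, vmk1, ...
                print("  %s is on portgroup '%s' -> vSwitch %s" %
                      (vnic.device, vnic.portgroup,
                       pg2vs.get(vnic.portgroup, "?")))
    finally:
        Disconnect(si)

If vmk0 turns out to sit on a vSwitch whose uplinks negotiate at 1000 Mb, that would explain the 1G flow you are seeing.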

You have 10 Gbit NICs in your vSphere ESXi servers, perfect. You have a physical backup server, perfect. You have local disks in your backup server, also perfect. You have 10 Gbit NICs in your physical backup server, which sounds even better.

Now you have two options to really get the backup speed you are looking for.

The most preferred transport mode would be SAN transport, which foggy proposed. This requires another FC adapter in your physical backup server and the ability to connect it to your SAN storage, either directly or via a SAN switch. Direct-attached FC depends on the number of available FC ports on your SAN storage; in that case, the extra FC adapter and an FC cable are all you need. Should the available FC ports on your SAN storage be exhausted, you will also need additional FC cables and a SAN switch, which is not inexpensive.

Before investing new money, you could also increase speed by using the HotAdd transport mode. In this case, you should keep the 1 Gbit NICs on the vSwitch where your VMkernel port is configured for management traffic, and put the 10 Gbit NICs on a new vSwitch where only your VMs reside, or at least where no VMkernel ports are configured for management traffic. Then go ahead and build one or more Windows VMs, preferably with the most recent Windows version, add a second LSI controller, and equip the VM(s) with at least 4 CPU cores and 4 GB RAM. Add those VM(s) to Veeam B&R, install the Veeam proxy transport agents, and use them as VMware proxy server(s).

With the help of these Veeam proxy server(s) you can use the HotAdd transport mode, which should be much faster than NBD. If your SAN storage is powerful enough, you should see theoretical transfer speeds between 500-700 MB/s.

Please use VMXNET3 as the NIC type in your Veeam proxy server(s); NBD does not profit from VMs with a VMXNET3 NIC, but HotAdd does. Please also consult the Veeam documentation regarding limitations and recommendations for the HotAdd transport mode.
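
If you want to double-check the NIC type of your proxy VM(s) afterwards, a small pyVmomi sketch like this one lists the virtual Ethernet adapters per VM. The "veeam-proxy" name prefix is only an assumed naming convention, and the connection details are placeholders again:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)  # placeholders
    try:
        content = si.RetrieveContent()
        vms = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True).view
        for vm in vms:
            if not vm.name.lower().startswith("veeam-proxy"):
                continue  # assumed proxy naming convention - adjust to yours
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                    ok = isinstance(dev, vim.vm.device.VirtualVmxnet3)
                    print("%s / %s: %s" % (vm.name, dev.deviceInfo.label,
                          "VMXNET3" if ok else "not VMXNET3, consider changing"))
    finally:
        Disconnect(si)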

Believe me, you won't regret the change from NBD to HotAdd if you configure your environment correctly.

Or spend the extra money and use SAN transport instead. Should you have NetApp SAN storage, the story might be different!

Tell us more about your SAN model, and whether you have already upgraded to vSphere 6.x.

Please let us know the results, so that other users might profit from this as well.

Regards,
Didi
Using Veeam Backup & Replication 9.5 Update 2 on every backup server here!
Didi7
Expert
 
Posts: 247
Liked: 17 times
Joined: Fri Oct 17, 2014 8:09 am

Re: 10G and nbd Backup

by ureitz » Sun Dec 10, 2017 8:06 pm

Thanks for the replies.

Direct SAN (Fujitsu Eternus) is not an option for us at the moment. The last NBD speed for full backups was ~230 MB/s.
I will check speeds with HotAdd.

Our main focus is restore times in an emergency. I will check this too.

Greetings
ureitz
Novice
 
Posts: 5
Liked: never
Joined: Tue May 29, 2012 3:45 pm
Full Name: Ulf Reitz

Re: 10G and nbd Backup

by Didi7 » Sun Dec 10, 2017 9:59 pm

Can you post the exact model of your Fujitsu Eternus? Thanks.

Btw, restoring with HotAdd instead of SAN also has advantages.

Regards,
Didi7
Using Veeam Backup & Replication 9.5 Update 2 on every backup server here!
Didi7
Expert
 
Posts: 247
Liked: 17 times
Joined: Fri Oct 17, 2014 8:09 am

Re: 10G and nbd Backup

by Gostev » Mon Dec 11, 2017 4:26 pm

ureitz wrote: Direct SAN (Fujitsu Eternus) is not an option for us at the moment.

Why not?
Gostev
Veeam Software
 
Posts: 21648
Liked: 2433 times
Joined: Sun Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: 10G and nbd Backup

by Didi7 » Mon Dec 11, 2017 4:31 pm

Additional hardware expenses?
Using Veeam Backup & Replication 9.5 Update 2 on every backup server here!
Didi7
Expert
 
Posts: 247
Liked: 17 times
Joined: Fri Oct 17, 2014 8:09 am

