ureitz
Novice
Posts: 5
Liked: never
Joined: May 29, 2012 3:45 pm
Full Name: Ulf Reitz
Contact:

10G and nbd Backup

Post by ureitz »

Hi,

So far we have had two ESXi (5.1) hosts with several 1 Gbit NICs. Storage for the VMs is a SAN (Fibre Channel). The Veeam server was a VM, and the backup target was a NAS.
There was / is one vSwitch configured.

Now we have equipped both hosts with a 10G NIC. Also new is a physical backup server with 10G NICs, local storage for backups, and a SAS tape library.

We added the 10G NICs to the management network in ESXi, but backup job performance is "only" 100 MB/s. The network flow from ESXi to the backup server is still 1G.

Do you have any advice on how to configure the ESXi hosts to use 10G instead of 1G? Do the VMs also need a 10G NIC (VMXNET3) inside?

Many thanks and greetings
lando_uk
Veteran
Posts: 371
Liked: 32 times
Joined: Oct 17, 2013 10:02 am
Full Name: Mark
Location: UK
Contact:

Re: 10G and nbd Backup

Post by lando_uk »

You'll find you won't get more than 1G per VM, but if you run parallel jobs the total will go over 1G. Try backing up multiple VMs at the same time and you'll see it go faster, though it won't be anywhere near 10G. Typically we get 300-500 MB/s during the nightly backups; you need to look at the ESXi network charts and not at the Veeam jobs themselves.
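As a very rough back-of-the-envelope check (pure arithmetic, nothing from Veeam or VMware; the ~117 MB/s per-stream figure is just an assumed practical ceiling for a 1 Gbit-limited stream), you can estimate how many disks need to run in parallel to reach a given aggregate rate:

PER_STREAM_MBPS = 117   # assumed usable throughput of one 1-Gbit-limited NBD stream
TARGET_MBPS = 400       # example aggregate, in the middle of our 300-500 MB/s range

streams_needed = -(-TARGET_MBPS // PER_STREAM_MBPS)  # ceiling division
print(f"~{streams_needed} concurrent disks needed to sustain {TARGET_MBPS} MB/s")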
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: 10G and nbd Backup

Post by foggy »

Please check what transport mode is being used during backup in the job session log. Make sure Direct SAN access is configured.
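If you have the session log exported as a text file, a small script can tally the bracketed transport tags for you. This is only a sketch: the file name is a placeholder and the exact tag format ([nbd], [san], [hotadd]) is an assumption, so verify it against your own log first.

import re
from collections import Counter

def transport_modes(log_path):
    # Count occurrences of the assumed transport-mode tags in a Veeam session log.
    pattern = re.compile(r"\[(nbd|san|hotadd)\]", re.IGNORECASE)
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            counts.update(m.lower() for m in pattern.findall(line))
    return counts

print(transport_modes("Job.Backup_VMs.log"))  # e.g. Counter({'nbd': 12})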
Didi7
Veteran
Posts: 490
Liked: 59 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: 10G and nbd Backup

Post by Didi7 » 1 person likes this post

Hello ureitz,

First of all, forget about the NBD transport mode: traffic is limited to roughly 30-40% of the link by VMware, as VMware reserves resources on vSwitches whose VMkernel ports are configured for management traffic.
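If you want to verify this on your hosts, here is a rough pyVmomi sketch (assuming the pyvmomi package is installed; host name and credentials are placeholders) that lists the VMkernel ports and the link speed of each physical uplink, so you can see which NICs actually carry your management traffic:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only, skips certificate checks
si = SmartConnect(host="esxi01.example.local", user="root", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for vnic in host.config.network.vnic:     # VMkernel ports
            print("  vmk:", vnic.device, "portgroup:", vnic.portgroup)
        for pnic in host.config.network.pnic:     # physical uplinks
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else "down"
            print("  pnic:", pnic.device, "link:", speed, "Mb/s")
finally:
    Disconnect(si)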

You have 10GBit NICs in your vSphere ESXi servers, perfect. You have a physical backup server, perfect. You have local disks in your backup server, also perfect. You have 10GBit NICs in your physical backup server, sounds even better.

Now you have two options to really get the backup speed you are looking for.

The preferred transport mode would be SAN transport, which was proposed by foggy. This requires an additional FC adapter in your physical backup server and the possibility to connect it to your SAN storage, either directly or via a SAN switch. Direct-attached FC depends on the number of free FC ports on your SAN storage; in that case the additional FC adapter and an FC cable are all you need. Should the FC ports on your SAN storage already be exhausted, you also need more FC cables and a SAN switch, which is not inexpensive.

Before investing new money, you could also increase speed by using the HOTADD transport mode. In this case, you should keep the 1GBit NICs in the vSwitch where your VMkernel port for management traffic is configured, and put the 10GBit NICs in a new vSwitch where only your VMs reside, or at least where no VMkernel ports for management traffic are configured. Then go ahead and build one or more Windows VMs, preferably with the most recent Windows version, add a second LSI controller, and equip each VM with at least 4 CPU cores and 4 GB RAM. Add those VMs to Veeam B&R, install the Veeam proxy transport agents, and use them as VMware proxy servers.
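If you prefer to script those VM tweaks instead of clicking through the vSphere client, a hedged pyVmomi sketch could look like the following; the VM and port-group lookups are left out and the names are placeholders, so treat it as a starting point only, not a finished tool:

from pyVmomi import vim

def proxy_device_spec(network):
    # Second SCSI controller (LSI Logic SAS, bus 1) for HotAdd disk mounts.
    scsi = vim.vm.device.VirtualDeviceSpec()
    scsi.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    scsi.device = vim.vm.device.VirtualLsiLogicSASController()
    scsi.device.busNumber = 1
    scsi.device.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    # VMXNET3 NIC attached to the 10G port group.
    nic = vim.vm.device.VirtualDeviceSpec()
    nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic.device = vim.vm.device.VirtualVmxnet3()
    nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        network=network, deviceName=network.name)
    nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True)

    return vim.vm.ConfigSpec(deviceChange=[scsi, nic])

# Usage, once you have looked up the proxy VM and the 10G port group:
# task = vm.ReconfigVM_Task(spec=proxy_device_spec(network))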

With the help of these Veeam proxy servers, you can use the HOTADD transport mode, which should be much faster than NBD. If your SAN storage is powerful enough, you should get a theoretical transfer speed between 500 and 700 MB/s.
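Just to put that into perspective with some simple arithmetic (the 2 TB full backup size is assumed; the 100 MB/s is roughly what you reported for NBD):

DATA_GB = 2000  # assumed size of one full backup
for mode, mbps in (("NBD", 100), ("HotAdd", 500)):
    hours = DATA_GB * 1024 / mbps / 3600
    print(f"{mode:7s} ~{mbps} MB/s -> {hours:.1f} h for {DATA_GB} GB full backup")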

Please use VMXNET3 as the NIC type in your Veeam proxy servers; NBD does not benefit from VMs with VMXNET3 NICs. Please also consult the Veeam documentation regarding limitations and recommendations for the HOTADD transport mode.

Believe me, you won't regret the change from NBD to HOTADD if you configure your environment correctly.

Or spend extra money and use SAN transport instead. Should you have NetApp SAN storage, the story might be different!

Tell us more about your SAN model, and have you already upgraded to vSphere 6.x?

Please let us know the results, so that other users can benefit from this as well.

Regards,
Didi
Using the most recent Veeam B&R in many different environments now and counting!
ureitz
Novice
Posts: 5
Liked: never
Joined: May 29, 2012 3:45 pm
Full Name: Ulf Reitz
Contact:

Re: 10G and nbd Backup

Post by ureitz »

Thanks for the replies.

Direct SAN (Fujitsu Eternus) is not an option for us at the moment. The last NBD speed for full backups was ~230 MB/s.
I will check speeds with HotAdd.

Our main focus is restore times in an emergency. I will check this too.

Greetings
Didi7
Veteran
Posts: 490
Liked: 59 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: 10G and nbd Backup

Post by Didi7 »

Can you post the exact model of your Fujitsu Eternus? Thanks.

Btw, restore with HotAdd instead of SAN also has advantages.

Regards,
Didi7
Using the most recent Veeam B&R in many different environments now and counting!
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: 10G and nbd Backup

Post by Gostev »

ureitz wrote: Direct SAN (Fujitsu Eternus) is not an option for us at the moment.
Why not?
Didi7
Veteran
Posts: 490
Liked: 59 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: 10G and nbd Backup

Post by Didi7 »

Additional hardware expenses?
Using the most recent Veeam B&R in many different environments now and counting!
