-
- Veteran
- Posts: 266
- Liked: 30 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
- Contact:
Proper setup for NBD backups?
First off, I don't have a problem - backups are working fine. This is more in the way of a sanity check. So, two vsphere hosts. Management network is the LAN (1gb through a switch.) Storage is accessed via 50gb Mellanox cards (back to back, no switch.) I've noticed that when direct nfs isn't working (for whatever reason), NBD mode seems to throttle at about 100MB/sec. The switch GUI shows the expected amount of 1gb traffic between the two hosts. Reading the description of NBD mode, it sounds like I need to enable management traffic on a vmkernel port on the 50gb vswitch, but also need to change how vcenter finds the hosts? e.g. remove and re-add them using the IP addresses of the vsphere hosts? If so, this may be more hassle than it is worth...
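For anyone who wants to double-check the same thing, a rough pyVmomi sketch along these lines (the vCenter address and credentials are placeholders from my lab, nothing official) should list each host's vmkernel ports and flag the ones carrying the Management service - that's the interface NBD traffic ends up on:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
si = SmartConnect(host="10.0.0.16", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Ask the host which vmkernel ports are selected for "management".
        mgmt = host.configManager.virtualNicManager.QueryNetConfig("management")
        selected = set(mgmt.selectedVnic or [])
        print(host.name)
        for vnic in host.config.network.vnic:
            is_mgmt = any(vnic.device in key for key in selected)
            print("  %-6s %-15s management=%s"
                  % (vnic.device, vnic.spec.ip.ipAddress, is_mgmt))
    view.Destroy()
finally:
    Disconnect(si)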
-
- Veteran
- Posts: 266
- Liked: 30 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
- Contact:
Re: Proper setup for NBD backups?
At the moment, I am doing this: the virtual storage appliance is running on vsphere host 10.0.0.4, using a passed-through 12gb HBA connected to the JBOD. I have migrated the veeam windows 2016 appliance to 10.0.0.4, on the theory that it can talk to the hypervisor using the vmxnet3 adapter and get well over 1gb/sec. This does in fact seem to work. To prevent the appliance from migrating due to DRS, I added a 1GB vdisk on the local datastore.
I'm not sure I understood the explanation as to how NBD works. I *think* I saw a veeam article that said that veeam B&R queries vcenter for the IP address of the host(s). In my case, that would be 10.0.0.4 and 10.0.0.5 (the VCSA is 10.0.0.16). So, if veeam B&R has to use the management network (10.0.0.0/24), it will be via the 1gb ethernet network (unless the traffic never leaves the host, as described earlier in this post.) Do I understand this correctly? My current switch (an edgeswitch 24) has no SFP+ ports, so I can't use my 2 intel 10gb cards.
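A quick way to sanity-check that resolution is something like this (the host names below are made up - substitute whatever vcenter actually lists); run it on the Veeam server and see which address each name comes back as:

import socket

# Placeholder names - use whatever vCenter shows for the ESXi hosts.
HOSTS = ["esxi1.lab.local", "esxi2.lab.local"]

for name in HOSTS:
    try:
        ip = socket.gethostbyname(name)
    except socket.gaierror as err:
        print("%-20s does not resolve (%s)" % (name, err))
        continue
    # 10.0.0.0/24 is the 1gb management LAN in this setup.
    network = "1gb management LAN" if ip.startswith("10.0.0.") else "other network"
    print("%-20s -> %-15s (%s)" % (name, ip, network))

If the names come back as 10.0.0.x addresses (or the hosts were added by IP), NBD will ride the 1gb link.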
If I want to put 10.0.0.4 and 10.0.0.5 on 10gb ethernet and bridge that to the 1gb management network, I'd need a switch with 2 SFP+ ports, or a separate switch with 2+ SFP+ ports and at least one rj-45 port to connect the two switches. Any thoughts/tips welcome.



-
- Veteran
- Posts: 266
- Liked: 30 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
- Contact:
Re: Proper setup for NBD backups?
*crickets*
-
- VP, Product Management
- Posts: 7204
- Liked: 1547 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: Proper setup for NBD backups?
Hi, these 2 articles address your questions:
https://www.veeambp.com/dns_resolution => How to get Veeam to use the non-default (in your case faster) VMkernel port.
https://www.veeambp.com/proxy_servers_i ... twork_mode => NBD tuning and explanations.
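If it helps, a rough pyVmomi sketch of the "extra management VMkernel port" part of the first article could look like this (the host name, vmk device and credentials are placeholders - adjust for your environment); the Veeam server then needs to resolve the host to that port's IP, e.g. via its hosts file:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
si = SmartConnect(host="10.0.0.16", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        if host.name != "10.0.0.4":        # placeholder: the host to change
            continue
        # Tag an existing vmkernel port on the fast vSwitch with the
        # Management service so NBD traffic can use it.
        host.configManager.virtualNicManager.SelectVnicForNicType(
            "management", "vmk1")          # placeholder vmk device
        print("Management service enabled on vmk1 of", host.name)
    view.Destroy()
finally:
    Disconnect(si)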
-
- Veteran
- Posts: 266
- Liked: 30 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
- Contact:
Re: Proper setup for NBD backups?
I saw that, thanks. My concern was that it would require changing things in my vcenter + 2 esxi configuration. I ended up moving my setup to a 24-port gigabit switch with 2 SFP+ ports. I put the 10gb port on each host into vswitch0 as active, demoted the 1gb port in each vswitch0 to standby, and am now getting much better performance.
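For anyone doing the same thing, a small pyVmomi check like this (vCenter address and credentials are placeholders) prints the failover order on each host's vSwitch0, so you can confirm the 10gb uplink is active and the 1gb uplink is standby:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
si = SmartConnect(host="10.0.0.16", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vswitch in host.config.network.vswitch:
            if vswitch.name != "vSwitch0":
                continue
            policy = vswitch.spec.policy
            order = policy.nicTeaming.nicOrder if policy and policy.nicTeaming else None
            print("%s %s active=%s standby=%s"
                  % (host.name, vswitch.name,
                     order.activeNic if order else "?",
                     order.standbyNic if order else "?"))
    view.Destroy()
finally:
    Disconnect(si)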