First of all, forget about the NBD transport mode: its traffic is limited to roughly 30-40% of link speed, because VMware reserves resources on vSwitches whose vmKernel ports are configured for management traffic.
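To put that percentage in concrete numbers, here is a back-of-envelope sketch. The 30-40% figure is the one stated above; the line-rate conversion is plain arithmetic, and this ignores protocol overhead:

```python
# Rough effective NBD throughput on a 10GbE link, assuming NBD is capped
# at roughly 30-40% of link speed as described above.

LINK_GBIT = 10                            # 10GBit NIC
LINE_RATE_MBPS = LINK_GBIT * 1000 / 8     # 10 Gbit/s = 1250 MB/s raw

low = LINE_RATE_MBPS * 0.30               # lower bound of the quoted range
high = LINE_RATE_MBPS * 0.40              # upper bound of the quoted range

print(f"NBD effective throughput: {low:.0f}-{high:.0f} MB/s "
      f"of {LINE_RATE_MBPS:.0f} MB/s line rate")
```

In practice real NBD numbers are often even lower, since the management vmKernel port also carries other traffic.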
You have 10GBit NICs in your vSphere ESXi servers, perfect. You have a physical backup server, perfect. You have local disks in your backup server, also perfect. You have 10GBit NICs in your physical backup server, sounds even better.
Now you have two options to really get the backup speed you are looking for.
The preferred transport mode would be SAN transport, as foggy proposed. This requires an additional FC adapter in your physical backup server and a way to connect that server to your SAN storage, either directly or via a SAN switch. Direct-attached FC depends on the number of free FC ports on your SAN storage; if a port is available, all you need is the FC adapter and an FC cable. Should the FC ports on your SAN storage be exhausted, you need the FC adapter plus additional FC cables and a SAN switch, which is not inexpensive.
Before investing new money, you could also increase speed by using the HOTADD transport mode. In this case, keep the 1GBit NICs on the vSwitch where your vmKernel port is configured for management traffic, and put your 10GBit NICs on a new vSwitch that carries only your VMs, or at least no vmKernel ports configured for management traffic. Then build one or more Windows VMs, preferably with the most recent Windows version, add a second LSI controller, and equip each VM with at least 4 CPU cores and 4GB RAM. Add those VMs to Veeam B&R, install the Veeam proxy transport agents, and use them as VMware proxy servers.
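To keep those proxy-VM recommendations in one place, here is a minimal sketch. The `ProxySpec` class and `check_proxy_spec` helper are hypothetical illustrations, not a Veeam or VMware API; the thresholds are the ones given above:

```python
from dataclasses import dataclass

@dataclass
class ProxySpec:
    cpu_cores: int
    ram_gb: int
    nic_type: str          # VMXNET3 recommended (see the NIC-type note below)
    scsi_controllers: int  # second LSI controller recommended for HOTADD

def check_proxy_spec(spec: ProxySpec) -> list[str]:
    """Return deviations from the proxy sizing recommended in this post."""
    issues = []
    if spec.cpu_cores < 4:
        issues.append("fewer than 4 CPU cores")
    if spec.ram_gb < 4:
        issues.append("less than 4GB RAM")
    if spec.nic_type.lower() != "vmxnet3":
        issues.append("NIC type is not VMXNET3")
    if spec.scsi_controllers < 2:
        issues.append("no second SCSI (LSI) controller")
    return issues

print(check_proxy_spec(ProxySpec(4, 4, "vmxnet3", 2)))  # []
```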
With the help of these Veeam proxy servers, you can use the HOTADD transport mode, which should be much faster than NBD. If your SAN storage is powerful enough, you should get a theoretical transfer speed of 500-700MB/s.
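For a rough feel of what 500-700MB/s means for a full-backup window, a quick sketch. The 10TB dataset size is an invented example, not a figure from this thread:

```python
# Estimate the full-backup window for a given dataset at the HOTADD
# throughput range quoted above. Dataset size is a made-up example.

DATASET_TB = 10
dataset_mb = DATASET_TB * 1024 * 1024   # TB -> MB

for speed_mbps in (500, 700):           # MB/s, range quoted above
    hours = dataset_mb / speed_mbps / 3600
    print(f"{DATASET_TB}TB at {speed_mbps}MB/s: ~{hours:.1f} hours")
```

Real jobs finish faster than this raw math suggests once Veeam's deduplication, compression, and incremental runs come into play.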
Please use VMXNET3 as the NIC type in your Veeam proxy servers; NBD does not benefit from VMXNET3, but HOTADD does. Please also consult the Veeam documentation regarding limitations and recommendations for the HOTADD transport mode.
Believe me, you won't regret the change from NBD to HOTADD if you configure your environment correctly.
Or spend extra money and use SAN transport instead. Should you have NetApp SAN storage, the story might be different!
Tell us more about your SAN model and whether you have already upgraded to vSphere 6.x.
Please let us know the results, so that other users can benefit from this as well.
Using Veeam Backup & Replication 9.5 Update 2 on every backup server here!