-
- Veteran
- Posts: 366
- Liked: 24 times
- Joined: May 01, 2013 9:54 pm
- Full Name: Julien
- Contact:
V11 10Gbit/s switches
Hi guys,
we are testing Veeam CDP / backup behind a 10 Gbit/s switch.
The speed I am getting is 125 MB/s;
with a 1 Gbit/s switch I get between 80 and 100 MB/s.
This is just for information, not a technical ticket.
Who is using V11 behind a 10 Gbit/s switch? What is your CDP speed / backup speed?
Thank you
-
- Product Manager
- Posts: 14839
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: V11 10Gbit/s switches
Hello,
From a software perspective, 10 Gbit/s is easy. With all-flash storage as the source, the network more and more often becomes the bottleneck.
What I can say about speed improvements: MTU 9000 is recommended for CDP to get the best speed.
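For reference, on an ESXi host the jumbo-frame change typically has to be applied both to the vSwitch and to the VMkernel interface; a minimal sketch, assuming a standard vSwitch named vSwitch1 and a VMkernel port vmk1 (your names will differ):
```
# Raise the MTU on the standard vSwitch (assumed name: vSwitch1)
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Raise the MTU on the VMkernel interface carrying the CDP traffic (assumed: vmk1)
esxcli network ip interface set -i vmk1 -m 9000
```
The physical switch ports in the path have to allow jumbo frames as well, otherwise frames get dropped or fragmented.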
Best regards,
Hannes
-
- Veteran
- Posts: 366
- Liked: 24 times
- Joined: May 01, 2013 9:54 pm
- Full Name: Julien
- Contact:
Re: V11 10Gbit/s switches
Thank you for your answer.
I just noticed the firewall uplink was still connected to the 1 Gbit/s physical switch; maybe this is why?
Do you suggest creating a separate VLAN for CDP with MTU 9000?
Can I add this somewhere in Veeam?
-
- Product Manager
- Posts: 14839
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: V11 10Gbit/s switches
Julien wrote: I just noticed the firewall uplink was still connected to the 1 Gbit/s physical switch; maybe this is why?
If the firewall is somehow in between the Veeam components, then this sounds like a very likely reason: 125 MB/s is exactly the line rate of a saturated 1 Gbit/s link.

Julien wrote: Do you suggest creating a separate VLAN for CDP with MTU 9000?
Only if all other bottlenecks (the firewall) were ruled out before _and_ you think it's still too slow. Reading the question again, I would say "stay with the defaults". MTU changes need to be done carefully to avoid mixed MTU sizes in one network, which leads to fragmented frames. How to configure MTU depends on the switch vendor and model. MTU size can be checked, for example, with ping and "don't fragment" set.
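As a sketch of that check: a 9000-byte MTU leaves 8972 bytes of ICMP payload after the 20-byte IP and 8-byte ICMP headers, so a non-fragmenting ping of that size should succeed end to end (addresses are placeholders):
```
# Windows: -f sets "don't fragment", -l sets the payload size
ping -f -l 8972 10.10.10.11

# Linux: -M do forbids fragmentation, -s sets the payload size
ping -M do -s 8972 10.10.10.11

# ESXi: vmkping with -d (don't fragment) from a specific VMkernel interface
vmkping -d -s 8972 -I vmk1 10.10.10.11
```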
-
- Veteran
- Posts: 366
- Liked: 24 times
- Joined: May 01, 2013 9:54 pm
- Full Name: Julien
- Contact:
Re: V11 10Gbit/s switches
We had a meeting today and a question was raised:
Can we create a VMkernel port for CDP in vCenter?
-
- Product Manager
- Posts: 14839
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: V11 10Gbit/s switches
Yes, a dedicated VMkernel port with a dedicated physical NIC is a recommendation for high-bandwidth deployments. But I suggest checking the overall design first.
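As a rough sketch of what that looks like per host with esxcli (the port group name, VLAN ID, and addresses are assumptions; the same can be done in the vSphere Client):
```
# Create a port group for CDP traffic on the existing vSwitch (assumed: vSwitch1)
esxcli network vswitch standard portgroup add -p CDP -v vSwitch1
esxcli network vswitch standard portgroup set -p CDP --vlan-id 20

# Add a new VMkernel interface on that port group, with jumbo frames
esxcli network ip interface add -i vmk2 -p CDP -m 9000
esxcli network ip interface ipv4 set -i vmk2 -I 10.10.20.11 -N 255.255.255.0 -t static
```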
-
- Veeam Legend
- Posts: 945
- Liked: 221 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: V11 10Gbit/s switches
We've got 10 Gbit/s between our servers, MTU is set to 9000, and it's no problem to replicate at, for instance, 1 GB (gigabyte) per second.
-
- Influencer
- Posts: 14
- Liked: 3 times
- Joined: Nov 17, 2016 3:20 pm
- Full Name: Edward B
- Contact:
Re: V11 10Gbit/s switches
My speed did not improve at all after adding 10 Gbit/s cards to the hosts and direct SFP+ copper cable connections to the switch. The VMware configuration has the 10 Gbit/s NIC as primary with 1 Gbit/s as failover. Very small deployment: two hosts, fewer than 10 VMs, and a physical distance of less than 10 ft / 3 m between hosts and switches.
I have not yet tested with a direct file copy to see whether there is anything wrong with the network configuration or whether it is something in the Veeam server configuration.
-
- Veeam Legend
- Posts: 945
- Liked: 221 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: V11 10Gbit/s switches
In such a case, using tools like LANBench would help you find out whether a direct VM-to-VM connection (across the two hosts) delivers the expected bandwidth, or at least an approximation of it. It might be that, for instance, your disks were the bottleneck and therefore less data was transferred.
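If you prefer a command-line tool over LANBench, iperf3 does the same raw-bandwidth check between two VMs on the two hosts, taking disks out of the picture entirely (the address is a placeholder):
```
# On the first VM: start the server
iperf3 -s

# On the second VM: run a 30-second test with 4 parallel streams
iperf3 -c 10.10.10.11 -P 4 -t 30
```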
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Aug 05, 2015 4:56 pm
- Contact:
Re: V11 10Gbit/s switches
Our Veeam (v10) server with local ReFS storage on SATA disks is linked over iSCSI (using a 10 Gbit/s NIC) to a 10 Gbit/s switch, which is linked by fiber to our storage containing the SAS and SSD arrays. We use a dedicated VLAN for this traffic, and MTU is set to 9000-ish (our switch required a slightly different value).
Also a small deployment, with two hosts connected to the storage using 10 Gbit/s copper.
My speed tests between the backup server and the storage, run against a VM or on the storage itself, read:
- Diskspd reported 150 MB/s to the SSD array and 100 MB/s to the SAS array.
- Using a tool provided by Veeam support (vixdisklib-rs.exe), I got an average of 390 MB/s on SSD and an average of 57 MB/s on SAS.
- Veeam B&R reports at most 290 MB/s read and 140 MB/s transfer on SSD, and 40 MB/s read and 35 MB/s transfer on SAS.
-
- Influencer
- Posts: 14
- Liked: 3 times
- Joined: Nov 17, 2016 3:20 pm
- Full Name: Edward B
- Contact:
Re: V11 10Gbit/s switches
Getting a bit better performance now that I realized I had to remove the VMware E1000E virtual adapters and use VMXNET 3 virtual adapters to get 10 Gbit/s on each Windows server. Testing file copies between Windows servers, speed improved from the 100 MB/s range to the 500 MB/s range for the ones plugged into the 10 Gbit/s switch ports. I still need to set jumbo frames, but I'm waiting for a new switch install before doing that. Thanks for the feedback; it has been helpful in getting closer to the goal.
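A quick way to confirm the adapter change from inside each Windows guest is to check the negotiated link speed; VMXNET 3 adapters should report 10 Gbps here:
```
# List adapters with their negotiated speed (run inside the Windows VM)
Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed
```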