-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
1Gbit/s per VMDK Limit
Hello,
we are currently implementing our new backup repository and running some backup speed tests.
We see a maximum inline speed of about 1.4 GByte/s during an active full backup, which is great for us.
However, this speed is only reached when processing many VMs/VMDKs at the same time.
Each individual VMDK seems to be capped at roughly 110 MByte/s, i.e. about 1 Gbit/s. Why?
So a VM with a single large 5 TB virtual disk takes several hours.
This "speed limit" only seems to affect VMs with disks on the iSCSI SAN.
The same VM copied to local disks on the same host backs up six times faster.
Is anybody aware of restrictions specific to iSCSI?
The SAN is an all-flash array connected via 8x 10 Gbit/s iSCSI, and every ESXi host has 2x dedicated 10 Gbit/s NICs.
We also only see this limit during backup; inside the VMs everything is fast.
Thanks a lot
-
- Veteran
- Posts: 298
- Liked: 85 times
- Joined: Feb 16, 2017 8:05 pm
- Contact:
Re: 1Gbit/s per VMDK Limit
1 Gbit/s = 125 MByte/s
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello nitramd,
here's a nice article on why 1 Gbit/s != 125 MByte/s in practice:
https://kb.netapp.com/app/answers/answe ... erface%3F-
So our 110 MByte/s could still indicate a 1 Gbit/s limitation somewhere, but we only have 10 Gbit/s connections.
Any other suggestions on what could be limiting the speed?
Regards
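As a sanity check on the numbers above, here is a quick back-of-envelope sketch (with standard frame-overhead figures assumed; adjust for your own MTU) of the usable TCP payload rate of a single 1 Gbit/s link without jumbo frames:

```python
# Rough usable TCP payload throughput over 1 GbE with a standard 1500-byte MTU.
LINK_BPS = 1_000_000_000          # 1 Gbit/s line rate
MTU = 1500                        # IP packet size (no jumbo frames)
ETH_OVERHEAD = 14 + 4 + 8 + 12    # Ethernet header + FCS + preamble/SFD + inter-frame gap
IP_TCP_HEADERS = 20 + 20          # IPv4 + TCP headers, no options

payload = MTU - IP_TCP_HEADERS    # 1460 data bytes per frame
wire = MTU + ETH_OVERHEAD         # 1538 bytes on the wire per frame
efficiency = payload / wire       # ~0.95

mbyte_per_s = LINK_BPS * efficiency / 8 / 1e6
print(f"~{mbyte_per_s:.0f} MByte/s usable")   # ~119 MByte/s
```

So the observed ~110 MByte/s per VMDK is right in the range of a single 1 Gbit/s-class stream, which is what makes the per-disk cap look so suspicious on 10 Gbit/s links.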
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: 1Gbit/s per VMDK Limit
What are the bottleneck stats for the jobs, and what transport mode (network, hot-add, direct SAN) do they use in each case - iSCSI SAN and local storage?
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
sorry for the delay...
For both jobs the bottleneck is identified as source > load: source 99%.
The backup server is a physical system with 2x 10 GbE teaming. The backup method is NBD.
I still think this limit is hard-coded in some configuration...
Regards
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: 1Gbit/s per VMDK Limit
It might be ESXi management interface throttling, try some other transport method to check.
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
@ElmerAcme
One more idea is to contact our support team: our engineers can perform additional testing, for example measuring NBD speed with a benchmark based on the VDDK sample tool, which performs read operations in a similar way to Veeam.
Basically, I agree with Foggy: it makes sense to test another transport mode. Any chance to try hot-add?
Thanks!
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
we already opened a case for this, and support also suggested using virtual proxies and hot-add.
But that is currently not an option for us.
All hosts in the VMware cluster show exactly this bottleneck.
The hardware spans three generations, from 2 GHz to 3.8 GHz CPUs, with correspondingly different RAM speeds.
So if this were a real hardware performance issue, I would expect different values per host.
Our goal for the end of the year is backup from SAN snapshots.
Does anyone else use a physical Veeam server with iSCSI storage?
I hope we are not the only ones with this issue.
Regards
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: 1Gbit/s per VMDK Limit
It doesn't depend on the host hardware - it's just the network mode limitation incurred by VMware. Could you please elaborate on why hot-add isn't an option? As an alternative, since you have a physical Veeam B&R server, consider trying direct SAN mode; it will give you better performance.
Here's also another existing thread with some hints.
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
I hope I got the right information...
As far as I know, hot-add requires a virtual proxy on every host. We cannot guarantee that those proxies remain on their host, as we don't have the required vSphere licensing for that. Just for testing it would work, but we were told that hot-add produces a lot of overhead for attaching and detaching the disks, which would take longer for our roughly 300 VMs.
Is there a reason for this limitation on the VMware side? The same VM with 5 additional disks runs 5 times faster, so the limit is per VMDK, not per VM.
114 MByte/s is near the theoretical limit for 1 Gbit/s TCP/IP networking without jumbo frames. Do you think this is just a coincidence?
My hope is to find this limiting setting and get a quick fix until we move on to the backup from SAN method.
Thanks
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
I would recommend testing hot-add anyway, just to understand how much you gain performance-wise and what the real overhead is.
For example, you may opt for a more flexible approach and back up only specific VMs via hot-add while the other VMs are processed in NBD mode.
Regarding the slow read in NBD mode: I believe a VM with 5 disks can be processed 5x faster because of parallel processing, whereas a VM with a single disk is processed in one thread.
I would also recommend asking our support team to run a read benchmark in NBD mode to determine the maximum achievable single-thread read speed in your specific environment.
Thanks!
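The parallelism point above can be sketched with some quick arithmetic, using the ~110 MByte/s per-stream figure reported earlier in the thread (an observed value, not a documented constant):

```python
# Back-of-envelope: per-VMDK stream cap vs. parallel disks.
PER_STREAM_MB_S = 110   # observed NBD ceiling per virtual disk in this thread

def backup_hours(disk_gb_sizes, per_stream=PER_STREAM_MB_S):
    """Each disk is read in its own stream, all in parallel;
    the job finishes when the largest disk does."""
    longest_gb = max(disk_gb_sizes)
    return longest_gb * 1024 / per_stream / 3600

print(backup_hours([5 * 1024]))   # one 5 TB disk:   ~13 hours
print(backup_hours([1024] * 5))   # five 1 TB disks: ~2.6 hours
```

This is why the cap only hurts VMs with one large disk: the same capacity split across several VMDKs is read in several parallel streams.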
-
- Service Provider
- Posts: 14
- Liked: 10 times
- Joined: Oct 19, 2018 7:02 am
- Full Name: Michael Engl
- Location: Germany
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
there is no need to have one hot-add proxy per host.
I would definitely recommend giving it a quick try. We use this without any problems for all customers where direct SAN access is not possible. NBD is only good when you don't care about performance.
It only takes a few minutes to set up. Then you can see whether your problem is related to the vmkernel limits/NBD or not.
Depending on your backup size per VM and the copy duration, a hot-add proxy with 8 cores (and 8 concurrent tasks) can easily saturate a 10G NIC. For example, while two disks are in the attach/detach process, you can still read 6 other disks in parallel.
If the proxy becomes the bottleneck in that case, scale out with a second one.
Also make sure you are running up-to-date versions of Veeam and ESXi. There were performance issues in the past using NBD over SSL (I think this was ESXi 6.5 U2 or early 6.7 versions).
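The proxy-sizing argument above can be sanity-checked with a small sketch; the per-disk read rate here is an assumed illustrative number, not a measured one:

```python
# Hypothetical sizing check: when does the proxy's 10 GbE NIC, rather than
# the task count, become the bottleneck for parallel hot-add reads?
NIC_MBYTE_S = 10 * 1000 / 8    # 10 Gbit/s ~= 1250 MByte/s line rate
PER_DISK_MBYTE_S = 200         # assumed per-disk read rate (varies with storage)
CONCURRENT_TASKS = 8           # proxy configured for 8 concurrent tasks

# Aggregate is capped by whichever limit is hit first.
aggregate = min(CONCURRENT_TASKS * PER_DISK_MBYTE_S, NIC_MBYTE_S)
print(f"aggregate ~{aggregate:.0f} MByte/s")   # NIC-limited at ~1250 MByte/s
```

With these assumed numbers, 8 tasks would already be NIC-limited, which matches the advice to scale out with a second proxy rather than adding more tasks.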
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
I just did a quick test this morning - my first time using this backup method...
With a virtual proxy and hot-add I got an overall processing rate of 800 MByte/s; the last running single disk reached 700 MByte/s.
I think this is as fast as you expected.
So, for me, there is no hardware bottleneck, since the same ESXi host components are used.
All hosts are running ESXi 6.7 with the latest available update, and Veeam 9.5 is fully patched.
I had a longer session with VMware support yesterday. After some testing they also pointed to some internal ESXi restriction for my backup path. They will escalate to engineering and get back to me.
Regards
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: 1Gbit/s per VMDK Limit
Thanks for the update. I'd appreciate if you also share the case results with us.
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
keep in mind that we should not compare an NBD read with a simple data transfer over the network link.
Veeam leverages VADP to access a virtual disk, and the API round trips related to the transport mode logic add overhead.
I would not expect a serious performance increase in NBD mode; instead, I suggest thinking about a permanent hot-add implementation in your environment.
By the way, hot-add supports the Advanced Data Fetcher, which is based on asynchronous reads; this technology usually gives a significant increase in read speed.
Nevertheless, let's wait for the troubleshooting results from the VMware support team.
Thanks!
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
so far I have received no new information from VMware support...
@PetrM
The overall speed with NBD is absolutely fine for us. It's just the per-VMDK limit, and only when the disk resides on iSCSI storage.
The same VM with disks on local storage also exceeds 600 MByte/s with NBD.
Regards
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
we received an answer from VMware engineering yesterday:
"This is expected behavior. According to the internal benchmarks, NFC can only use up to ~ 1.3 Gbps of network bandwidth, which is in line with 125 MBps number quoted in this SR. Increasing the maximum network bandwidth usable by NFC would require a significant rearchitecture of NFC.
As you may know vADP has different modes of transport out of which NBD is one and it uses NFC's Async IO mode. With the current architecture NFC's ASync IO mode is not capable of saturating high speed network links such 10 GbE with a single stream.
With that said, NFC re-architecture is in progress."
We will try to move to "backup from SAN" during the next weeks...
Regards
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
- Contact:
Re: 1Gbit/s per VMDK Limit
Hey ElmerAcme,
Very interesting! Thank you very much for sharing this!
Is this an "internal only" limitation at VMware, or do they have a document on it? I'm always moving clients towards hot-add and SAN mode, but it's sometimes a challenge with my more... opinionated clients, and I'd love to have a big stick from VMware to smack 'em with.
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: 1Gbit/s per VMDK Limit
Actually, it's a good idea to clarify whether this limitation is already documented somewhere, and perhaps ask VMware engineering to publish a KB article.
Thanks!
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
the VMware case is already closed. I will try to contact them again...
Regards, Andreas
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: 1Gbit/s per VMDK Limit
You may also contact our support team and ask our engineers to assist you in the communication with VMware engineering; the outcome of this research might be useful for us and for our knowledge base as well.
Thanks!
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: Apr 24, 2020 6:14 am
- Contact:
Re: 1Gbit/s per VMDK Limit
Hello,
so far I haven't received any further response from VMware support.
Meanwhile, we have reconfigured our setup to use direct SAN access. For a single disk, the stream is now 7 times faster!
Regards