Hi All,
I have 4 physical 1TB 7200 RPM HDDs connected to a PERC H710 Mini on a Dell server; all disks are in RAID0 on a SAS/SATA interface. The NIC ports are all 1 Gbps, and the LAN cabling is Cat5e.
The SAN & NAS appliance runs as a VM using Physical Disk 2, which is then configured as iSCSI storage on the ESXi host, and the VMs are stored on it.
Veeam runs on Windows Server on Physical Disk 3, so Veeam backs up the VMs from Disk 2 to Disk 3 over a 1 Gbps connection, but the backup speed does not go above 6 MB/s. A 1 Gbps link should give at the very least ~75 to 90 MB/s, unless some other component in the network is the issue.
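As a rough sanity check of those numbers, here is a quick back-of-the-envelope calculation (the 0.75 efficiency factor is just an assumption for sustained real-world transfers, not something I measured):

# Back-of-the-envelope check: expected vs. observed backup throughput.
# The 0.75 efficiency factor is an assumption for sustained real-world
# transfers over 1 Gbps (protocol overhead, storage latency, etc.).

LINK_MBPS = 1000          # 1 Gbps link
EFFICIENCY = 0.75         # assumed real-world utilisation
OBSERVED_MBYTES = 6.0     # backup rate reported by Veeam

raw_mbytes = LINK_MBPS / 8                 # ~125 MB/s theoretical maximum
expected_mbytes = raw_mbytes * EFFICIENCY  # ~94 MB/s realistic target

print(f"Raw 1 Gbps:      {raw_mbytes:.0f} MB/s")
print(f"Expected (~75%): {expected_mbytes:.0f} MB/s")
print(f"Observed:        {OBSERVED_MBYTES:.0f} MB/s "
      f"({OBSERVED_MBYTES / expected_mbytes:.0%} of expected)")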
I have configured my network as follows; the physical switch is a Gigabit switch as well.
Veeam throughput:
I have read this [ veeam-backup-replication-f2/read-this-f ... tml#p95291 ] and it says:
"Source" is the source (production) storage disk reader component. The percent busy number for this component indicates percent of time that the source disk reader spent reading the data from the storage. For example, 99% busy means that the disk reader spent all of the time reading the data, because the following stages are always ready to accept more data for processing. This means that source data retrieval speed is the bottleneck for the whole data processing conveyor. As opposed to that, 1% busy means that source disk reader only spent 1% of time actually reading the data (because required data blocks were retrieved very fast), and did nothing the rest of the time, just waiting for the following stages to be able to accept more data for processing (which means that the bottleneck is elsewhere in the data processing conveyor).
All I could understand so far is that the disk read speed is slow because it's an HDD?!
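Purely to illustrate how those bottleneck percentages are meant to be read, a minimal sketch (the component names follow Veeam's standard Source / Proxy / Network / Target breakdown; the percentages below are made up, not from my job):

# Sketch: whichever component is busiest the largest share of the time is
# the bottleneck. The percentages here are hypothetical.

busy = {
    "Source": 99,   # source disk reader: % of time spent reading
    "Proxy": 12,
    "Network": 8,
    "Target": 5,
}

bottleneck = max(busy, key=busy.get)
print(f"Bottleneck: {bottleneck} ({busy[bottleneck]}% busy)")
# A 99% busy Source means the job is waiting on reads from the source
# storage, exactly as described in the quote above.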
Thank You
- tryllz (Novice)
- Chief Product Officer
Re: Veeam Backup Crawling on 1Gbps Connection ?!
Hi, thanks for checking out the FAQ before posting. You should also review the Processing Modes section to ensure you're using the optimal one for your situation, and experiment with different modes to see if there's an improvement. In general, as long as the primary storage and network fabric allow for it, Veeam can do over 10 GB/s with a single backup proxy.
Please note, however, that this is not a support forum, and we're unable to troubleshoot issues with your storage and network infrastructure over forum posts in any case. Also, kindly include a support case ID for the issue above, as requested in the red text shown when you click New Topic; otherwise this topic will eventually be removed by moderators.
Thanks!
- RGijsen (Expert)
Re: Veeam Backup Crawling on 1Gbps Connection ?!
I'm not quite sure what to make of your design. Even though you don't specify the disks by type, the only 7200 RPM SAS disks I've ever seen are nearline SAS, not the 'real deal', i.e. SATA disks with some kind of signal converter, lacking the full SAS feature set. But even if they are proper SAS disks, with 4 spindles, even in RAID0 (ouch, by the way; I can only really, really hope this is a test setup for a proof of concept of an environment that's only ever going to be QA at most), you'll have about 400-500 IOPS at best. Which is not a lot.
But it seems you are running these VMs on the very same host as well. You write that you have the 4 disks in RAID0, but you also say you pass through a physical disk to each of the VMs. So which is it? Do you have virtual disks residing on that RAID0 of 4 disks, with a virtual disk presented to each of the VMs? Or are you actually presenting one single physical disk to each of the VMs? Either way, you'll have an extremely low number of IOPS to play around with. If you have one 'big' RAID0 array from which you present virtual disks to the VMs, it's probably even worse, as you'll have to split the IOPS across all nodes. So from the physical disk perspective there would be practically no sequential I/O when reading from your VM and writing to Veeam, even if from the VM and Veeam perspective it is sequential.
So in other words, I think you'll have to be more specific about your setup. But either way, even for a test setup it is probably way, way underpowered, and since you don't seem to care about redundancy here, you'd probably be better off buying even a single SSD to replace the physical disks.
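To put a rough number on that, a quick sketch (the per-spindle IOPS and the block size below are assumptions for illustration, not measurements of your array):

# Rough estimate for 4x 7200 RPM spindles in RAID0 under random I/O.
# 75-100 IOPS per spindle is a common ballpark for 7200 RPM disks;
# 64 KB is an arbitrary block size chosen just for the example.

SPINDLES = 4
IOPS_PER_SPINDLE = 100
BLOCK_SIZE_KB = 64

total_iops = SPINDLES * IOPS_PER_SPINDLE           # ~400 IOPS
random_mbytes = total_iops * BLOCK_SIZE_KB / 1024  # MB/s if fully random

print(f"Aggregate random IOPS: {total_iops}")
print(f"Throughput at {BLOCK_SIZE_KB} KB random blocks: {random_mbytes:.0f} MB/s")
# If the VMs, the SAN VM and the backup target all share the same spindles,
# reads and writes compete and the effective rate drops further still.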
- tryllz (Novice)
Re: Veeam Backup Crawling on 1Gbps Connection ?!
Thanks for clarifying @RGijsen,
Sorry for the delayed response; this is a test lab, so redundancy/IOPS don't matter.
Anyhow, I re-drew how the network is set up, all the way from physical to virtual, for better understanding.
The physical disks (configured as RAID0 in iDRAC) are presented as virtual disks to ESXi on the Dell server; ESXi presents these disks to StarWind SAN & NAS, which then presents them again as virtual disks to the vCenter hosts.
But here is the thought I'm having: since vCenter is behind a 100 Mbps connection and Veeam behind a 1 Gbps one, could this be the limiting factor? In other words, is Veeam routing the backup traffic through vCenter? The reason for this thought is that I removed everything from the Dell/ESXi host, added 2 VMs to test bandwidth with iPerf, and got 815 Mbps.
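For what it's worth, here is a quick comparison of the links involved against the observed backup rate (only unit conversions; the figures are the ones already mentioned above):

# Compare the nominal link speeds and the iPerf result with the ~6 MB/s
# backup rate. Figures are taken from the posts above.

def mbps_to_mbytes(mbps: float) -> float:
    """Megabits per second to megabytes per second."""
    return mbps / 8

links = {
    "vCenter management link (100 Mbps)": 100,
    "Veeam link (1 Gbps)": 1000,
    "iPerf between two VMs (measured)": 815,
}

OBSERVED_BACKUP_MBYTES = 6.0

for name, mbps in links.items():
    print(f"{name}: {mbps_to_mbytes(mbps):.0f} MB/s max")
print(f"Observed backup rate: {OBSERVED_BACKUP_MBYTES:.0f} MB/s")
# Even the 100 Mbps path tops out around 12 MB/s, so 6 MB/s is lower than
# any single link here would explain on its own.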