-
- Influencer
- Posts: 12
- Liked: never
- Joined: Sep 28, 2011 10:33 am
- Full Name: Luke Whitworth
- Contact:
Source SAN issue
I'm currently investigating a potential speed issue with my production SAN. When I run a Veeam backup job with a completely separate system as the target, I'm getting throughput of approx. 78MB/sec in hot add mode. However, the bottleneck analysis shows:
Source: 99%
Proxy: 93%
Network: 5%
Target: 23%
Now the reading I've done suggests that a source bottleneck of 99% shows there's a real problem with access speed to the source SAN. However, 78MB/s seems like a pretty good speed, so could it just be that I'm hitting the limit of the SAN I've got (a Dell MD3200i)?
Cheers,
Luke
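(Editor's aside: as a rough sanity check, the sketch below compares the observed 78MB/s against the practical ceiling of a single 1Gbps iSCSI path. The ~10% protocol overhead figure is an assumption, not a measured value.)

# Rough sanity check: how close is 78 MB/s to one 1 Gbps iSCSI path?
link_gbps = 1.0                          # single 1 Gbps path
raw_mb_per_s = link_gbps * 1000 / 8      # 125 MB/s on the wire
overhead = 0.10                          # assumed TCP/IP + iSCSI framing overhead
usable_mb_per_s = raw_mb_per_s * (1 - overhead)   # ~112 MB/s usable

observed = 78.0
print(f"usable ceiling ~{usable_mb_per_s:.0f} MB/s, "
      f"observed {observed:.0f} MB/s "
      f"({observed / usable_mb_per_s:.0%} of ceiling)")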
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Sep 28, 2011 10:33 am
- Full Name: Luke Whitworth
- Contact:
Re: Source SAN issue
Should have given a bit more info in the original post: the source SAN is a Dell MD3200i with 12 x 15K RPM SAS disks in RAID5. The target is a Dell server with 12 x 7.2K RPM SAS disks in RAID5 local storage. The network is 1Gbps.
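(Editor's aside: a very rough model, sketched below, suggests the 12 x 15K SAS spindles themselves are unlikely to be the sequential-read limit. The per-disk figure is an assumed ballpark, and real RAID5 read behaviour varies by controller.)

# Very rough estimate of the source array's sequential read ceiling.
disks = 12
per_disk_seq_mb_s = 150       # assumed ballpark for a 15K SAS spindle
data_spindles = disks - 1     # simple model: one spindle's worth of parity

array_seq_mb_s = data_spindles * per_disk_seq_mb_s
print(f"rough array read ceiling ~{array_seq_mb_s} MB/s")   # ~1650 MB/s
# Far above a single 1 Gbps iSCSI path, so the spindles are unlikely
# to be what caps the job at ~78 MB/s.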
-
- Novice
- Posts: 7
- Liked: never
- Joined: Jul 08, 2011 11:20 am
- Full Name: Patrick van Beek
- Contact:
Re: Source SAN issue
What are the specs of your backup server?
How many CPUs?
How many network connections to your MD3200i?
Do you use multipathing to access the SAN?
I think the SAN should be able to push more than 78MB/sec, although it's a decent number.
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Sep 28, 2011 10:33 am
- Full Name: Luke Whitworth
- Contact:
Re: Source SAN issue
Backup server is a VM with 4 vCPUs and 4GB of RAM.
I do indeed use multipathing to the SAN. There are three hosts in the production environment, all of which have two NICs configured for iSCSI. Two of the hosts target two pairs of the SAN controller ports (as the SAN is dual controller), and the third host, which runs some higher-usage VMs, targets the remaining two pairs of ports.
-
- Novice
- Posts: 7
- Liked: never
- Joined: Jul 08, 2011 11:20 am
- Full Name: Patrick van Beek
- Contact:
Re: Source SAN issue
In that case, I believe the number of CPUs is your bottleneck.
I'm currently running a backup job as I type.
I've got a physical backup server with 2 quad-core CPUs and the CPU is running at 100% (I'm guessing the dedup and compression are to blame here).
If I monitor network usage - I've got 4 NICs to my iSCSI network connected to an EqualLogic - they run at about 30-40% each.
I had a processing rate of 82MB/sec.
During backup I had:
Source: 96%
Proxy: 94%
Network: 25%
Target: 11%
So, similar numbers that indicate the CPU seems to be the bottleneck.
As a test, you could reduce the compression level to see if that makes an impact.
Regards,
Patrick
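(Editor's aside: to illustrate Patrick's point that compression work alone can cap a proxy's per-core throughput, here's a minimal sketch. It times Python's zlib on random test data, which is not Veeam's actual codec or data, but it shows the order of magnitude a single core can compress at.)

import os
import time
import zlib

# Time single-core compression throughput at a few levels on 32 MB of
# incompressible test data. Purely illustrative; Veeam uses its own codec.
data = os.urandom(32 * 1024 * 1024)

for level in (1, 6, 9):
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    mb_s = len(data) / (1024 * 1024) / elapsed
    print(f"zlib level {level}: ~{mb_s:.0f} MB/s on one core")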
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Sep 28, 2011 10:33 am
- Full Name: Luke Whitworth
- Contact:
Re: Source SAN issue
Cheers for the suggestion, Patrick - sadly it doesn't appear to have made any difference to throughput. The CPUs are no longer maxed out when a job is running, but I'm still seeing the same kind of throughput, and the bottleneck analysis is now showing:
Source: 99%
Proxy: 58%
Network: 11%
Target: 25%
Any other ideas?
Regards,
Luke
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Source SAN issue
Bottlenecks do not indicate a "problem"; it's simply the place that was the "most busy". If you're satisfied with the speed based on your hardware, then 99% for the source is fine. However, if you have a SAN that can deliver 100MB/s and you're seeing 15MB/s with a 99% source bottleneck, well, then that's an issue. If your storage can deliver 100MB/s max and you're seeing 78MB/s and 99%, that might just be the best you'll be able to fully sustain. Every environment has a bottleneck somewhere.
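(Editor's aside: a minimal model of Tom's point, using made-up stage rates rather than Veeam internals: the job only moves as fast as the slowest stage, and that stage will always report ~99-100% busy, whether it is genuinely slow or simply the current limit.)

# Illustrative pipeline model: end-to-end rate = slowest stage's rate,
# and each stage's "busy %" is roughly throughput / its own capability.
stage_rates_mb_s = {"Source": 80, "Proxy": 85, "Network": 900, "Target": 320}

throughput = min(stage_rates_mb_s.values())
print(f"end-to-end throughput ~{throughput} MB/s")

for stage, rate in stage_rates_mb_s.items():
    print(f"{stage:<8} busy ~{throughput / rate:.0%}")
# The slowest stage shows ~100% busy even when nothing is "broken".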
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Source SAN issue
Also remember that the storage you are getting backups from is also used in production, and is running other VMs while you run backups.
Even if the storage has a higher throughput, you could get a lower speed from Veeam, but this is not necessarily an issue.
If the test with a lower compression level showed a decrease in CPU usage without a higher source speed, then the values you are seeing are probably the maximum you can get from that storage.
As a comparison, we are hitting about 110MB/s from a full 15K SAS LeftHand storage via 1Gbps iSCSI, so 70/80 is not a bad number.
Luca.
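(Editor's aside: one way to separate "the SAN is slow" from "the backup job is the limit" is to time a large sequential read from the source storage outside of Veeam. A minimal sketch follows; the file path is a placeholder, and the OS page cache can inflate results, so use a file larger than RAM or a freshly written one.)

import time

# Time a large sequential read from the source storage, outside of any
# backup job. Run this from a machine that reaches the SAN over the same
# path the proxy uses.
path = "/path/to/large/test/file"   # placeholder: a big file on the MD3200i LUN
chunk = 8 * 1024 * 1024             # 8 MB reads
total = 0

start = time.perf_counter()
with open(path, "rb", buffering=0) as f:
    while True:
        block = f.read(chunk)
        if not block:
            break
        total += len(block)
elapsed = time.perf_counter() - start

mb = total / (1024 * 1024)
print(f"read {mb:.0f} MB in {elapsed:.1f}s = {mb / elapsed:.0f} MB/s")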
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Sep 28, 2011 10:33 am
- Full Name: Luke Whitworth
- Contact:
Re: Source SAN issue
Cheers for the feedback and help, all. I had an issue with the last firmware update blowing latencies on the SAN through the roof, and I'm trying to work out whether it's all back to normal. I'm quite happy with it now, but was concerned there might still be an issue that Veeam was highlighting for me, hence why I started the thread.
If that's a decent enough speed for the hardware installed then I'll stop obsessing quite so much.