-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: May 11, 2014 8:52 pm
- Full Name: Eric Singer
- Contact:
Slower than expected backup
Hi folks,
We're not getting quite the throughput I'd expect out of our backup infrastructure, and it's hard to determine exactly why. The bottleneck is listed as source, but I find that very hard to believe.
As an example, we have a VM that's 3.1TB. A full backup took 4 hours and 20 minutes at an average speed of 214MBps. My SAN can go a LOT faster than that, and I've tested it multiple times with IOmeter. I have seen it go as high as 2GBps, and it has no problem averaging 1 - 1.6GBps during sequential workloads. I was watching the backup while it was running, and my SAN wasn't even breaking a sweat (<5ms latency). Everything network-wise is 10g, and all links were wide open. In the job I can see it peaked for a very short burst at 675MBps, so it's clear to me that it can go fast; I just don't know why it's not consistent. The backup destination is also 10g, and I have benchmarked that at 1GBps write, so if nothing else, I'd expect the bottleneck to be either the destination or the proxy.
A few additional notes:
1. We're using hot add
2. The proxy server has 12 vCPUs and 32GB of memory
3. I mentioned this, but I will again, everything is connected by 10g, no 1g in the mix at all.
4. Job settings are all defaults, no additional compression or deduplication.
5. Switching is all wire speed (no oversubscribed ports)
6. In VMware we have NIOC and SIOC enabled, but we're not even reaching levels where they should need to throttle anything. Also, these hosts are on 5.5, in case that matters.
7. The full Veeam infrastructure consists of 3 proxies (identical to the one above), 5 SANs (Nimble cs460 x2), and 4 VMware hosts (dual 12-core procs with 768GB of RAM). I have anti-affinity rules for the Veeam proxies so they don't run on the same hosts.
I'm wondering if perhaps there are some default (conservative) Veeam or VMware settings that need to be tweaked? I'd really like to see the average throughput around 500MBps at a minimum, and would really want to see 1GBps+. In fact, I'd like to see 1GBps+ per Veeam proxy (assuming the jobs are pulling VMs from different SANs at a given time).
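As a quick sanity check on the numbers above: a 3.1TB full backup finishing in 4 hours 20 minutes works out to roughly the reported 214MBps average. A minimal sketch (assuming 1 TB = 1024² MB, the binary units most backup tools report in):

```python
# Average throughput implied by the reported job size and duration.
def avg_throughput_mbps(size_tb: float, hours: int, minutes: int) -> float:
    size_mb = size_tb * 1024 * 1024          # TB -> MB (binary units)
    seconds = hours * 3600 + minutes * 60    # total wall-clock duration
    return size_mb / seconds

rate = avg_throughput_mbps(3.1, 4, 20)
print(f"{rate:.0f} MB/s")  # prints "208 MB/s", consistent with the reported 214MBps
```

The small gap between 208 and 214 is just rounding in the reported size and duration; either way the job ran at roughly a fifth of the benchmarked SAN read rate.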
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Slower than expected backup
Eric, what if you try to use network transport mode instead of hotadd?
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: May 11, 2014 8:52 pm
- Full Name: Eric Singer
- Contact:
Re: Slower than expected backup
I'll let you know after I run it against something with a larger block of changes to back up. A quick test on something with a low change rate didn't show any improvement (other than not having to hot add the disks). Ironically, the job I used as a test last night ran at 380MBps, which isn't what I'd like to see, but certainly a lot faster than some of the other stuff. It's very strange the way the performance is all over the place.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Slower than expected backup
What was the bottleneck in this case?
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: May 11, 2014 8:52 pm
- Full Name: Eric Singer
- Contact:
Re: Slower than expected backup
Load: Source 98% > Proxy 43% > Network 10% > Target 8%
BTW, I don't get this either: why aren't these parts of a total that equals 100%? Meaning, 98 + 43 + 10 + 8 doesn't add up to 100.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Slower than expected backup
No, it does not. Here's probably an even better explanation from Tom.