-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: May 11, 2014 8:52 pm
- Full Name: Eric Singer
- Contact:
Make Veeam push harder
Hi,
Is there any way, through a registry modification or something else, to force Veeam to push/pull harder?
Here is my setup:
76 disks in a RAID 10
4 JBODs
19 disks in each JBOD
Each JBOD connected directly to a single quad-port SAS adapter
Proxy server is a Dell R720 with dual 8-core processors and 384 GB of RAM
Benchmarks with IOmeter:
256 KB sequential read = 4 GBps
256 KB sequential write = 1.5 GBps
256 KB random read = 780 MBps
64 KB random read = 350 MBps
To the best of my knowledge, Veeam uses a MUCH larger I/O size than any of the ones I've listed, meaning my throughput should actually be better than, or at least the same as, the numbers above. The problem I'm having is that Veeam really doesn't seem to push hard enough while synthetic rollups are going on. The CPU isn't stressed, and my queue depth is less than 5. I suspect the latter is the problem: a queue depth of 5 is pretty darn low for a system with this many disks, IMO. Is there any way to force Veeam to push/pull harder (i.e., run a higher queue depth)?
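As a back-of-envelope illustration of why that queue depth looks low (a rough sketch based only on the numbers above, nothing Veeam-specific), spreading five outstanding I/Os across 76 spindles leaves most disks idle at any instant:

```python
# Back-of-envelope: outstanding I/Os per spindle, given the setup above.
disks = 76                  # RAID 10 across 4 JBODs, per the setup
aggregate_queue_depth = 5   # observed during the synthetic rollup

per_disk = aggregate_queue_depth / disks
print(f"~{per_disk:.3f} outstanding I/Os per disk")  # ~0.066
```

With fewer than 0.1 outstanding requests per disk, the array spends most of its time waiting for the next request rather than seeking, which matches the "not pushing hard enough" symptom.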
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Make Veeam push harder
Eric, what backup method are you using (forever-forward/forward/reverse)? Are you referring to synthetic full backups taking a long time? What are your bottleneck stats for the affected jobs?
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: May 11, 2014 8:52 pm
- Full Name: Eric Singer
- Contact:
Re: Make Veeam push harder
I have two setups:
1. A copy job (so that's always-incremental)
2. My "protection" job is a standard forward (not forever) incremental.
Both are on the same set of disks and both use the same repository, but my protection job runs at 6 PM and my copy job runs at 12 PM (the next day). In the case of the copy job, it states the "source" is the bottleneck.
FWIW, while a synthetic is running, I can fire up IOmeter and easily pull a few hundred MBps. I'd really like to see Veeam beat the snot out of my storage; that's why I have so many disks, the main goal being to improve performance.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Make Veeam push harder
So you're concerned about backup copy job performance, right? Does it run within a single storage (using it both as source and target at the same time)? Is it the only job that touches the storage at that time? What is the speed the data are read at? Could you provide full bottleneck stats for it, please?
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: May 11, 2014 8:52 pm
- Full Name: Eric Singer
- Contact:
Re: Make Veeam push harder
Well, copy performance is OK; it's mostly the synthetic rollup performance that I think should go faster. Everything right now runs off the same disks and the same location. They're separated into repositories, but the repositories themselves are on the same server.
The speed varies: sometimes it's as low as 25 MBps (if the data set is small), and at times it gets up to 300-400 MBps. But my storage is capable of GBps. I don't think you actually show performance numbers for the synthetic process itself, right? Just the actual copy/backup portion of the job. If you do, I'd need guidance on how to find that.
Here is an example of a job that runs at 65 MBps:
1/21/2015 6:20:34 PM :: Busy: Source 85% > Proxy 16% > Network 65% > Target 21%
Right now I have everything pointed at a network share (DFS), just so that it's easy to move the data around (the data is stored on a clustered file server), but the proxy assigned to the repositories is the active cluster node. I guess that has some potential to slow things down, but then again, it should be looping internally in the networking stack, and my networking is all 10G, so 1 GBps should be a very doable number.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Make Veeam push harder
So both repositories are CIFS shares? Where is Veeam B&R itself installed?
-
- Influencer
- Posts: 24
- Liked: 7 times
- Joined: May 11, 2014 8:52 pm
- Full Name: Eric Singer
- Contact:
Re: Make Veeam push harder
The Veeam server is installed in a VM.
The proxy agent is installed on the cluster node that currently holds the storage (a physical server, to be clear).
The repositories, while pointing to a CIFS share hosted by the cluster above, are statically set to use the proxy server above.
As a quick test I set up a local repository (taking CIFS out of the loop) and got 865 MBps, which isn't bad. But that was also a clean copy job (all sequential). I normally don't have issues with that part of the job; it's the synthetic process that doesn't seem to push hard enough.
1/22/2015 11:42:40 AM :: Primary bottleneck: Network
1/22/2015 11:42:40 AM :: Busy: Source 40% > Proxy 16% > Network 97% > Target 35%
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Make Veeam push harder
Eric, a few things:
- The tests you ran in IOmeter don't correctly depict a synthetic process. You should use 50% reads / 50% writes, random, with a block size equal to half the storage optimization setting (if it's the default Local target, use 512 KB).
- Half the storage I/O is used to read the original blocks, so the result should be divided by at least 2.
- A CIFS share involves traffic going back and forth over the network to the proxy acting as the gateway server (basically, it runs the repository role locally and proxies requests to the CIFS share). This adds latency and slows things down, and blocks need to cross the wire back and forth for every read and write instead of staying local.
Surely the local mount is a good first step, but again, a sequential write is not a good way to simulate the performance of a synthetic creation.
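Luca's first two points can be combined into a quick rule-of-thumb estimate (a sketch of the arithmetic only; the 700 MBps figure below is a hypothetical benchmark result, not a measured one): benchmark a 50/50 random workload at the job's block size, then halve it, since every merged block is read once and written once.

```python
# Rough estimate of achievable synthetic-merge speed from a mixed
# random benchmark, per the reasoning above: half the storage I/O
# goes to reading original blocks, so the effective merge rate is
# about half the measured 50/50 random throughput.

def synthetic_merge_estimate_mbps(mixed_5050_random_mbps: float) -> float:
    """Halve the measured 50/50 random throughput, since each block
    is read once and written once during the merge."""
    return mixed_5050_random_mbps / 2.0

# Hypothetical example: a 512 KB, 50/50 random IOmeter run measuring
# 700 MBps suggests the merge itself tops out around 350 MBps.
print(synthetic_merge_estimate_mbps(700))  # 350.0
```

This is why a 4 GBps sequential-read number says little about synthetic full speed: the merge workload is random and bidirectional, so the relevant baseline is the mixed random figure, halved.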
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1