Comprehensive data protection for all workloads
jadams159
Enthusiast
Posts: 80
Liked: 4 times
Joined: Apr 16, 2012 11:44 am
Full Name: Justin Adams
Location: United States
Contact:

Target a bottleneck?

Post by jadams159 »

I have one Veeam v7 server and an additional proxy backing up to a Data Domain 2500. My jobs report that the Data Domain is the bottleneck; however, I can push more and more data to it by starting additional backup jobs and by copying data to it via CIFS, and it doesn't skip a beat. Resource utilization (CPU and disk) on the Data Domain is very low, and I'm connected via 10Gb fiber. I don't have any throttling enabled, so I'm wondering why a single job won't run faster.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Target a bottleneck?

Post by tsightler »

Are you using parallel processing in V7? I'm thinking not since you mention that you can start additional backup jobs to increase the data flow. With V7 and parallel processing a single job should be sending multiple streams. Also, is this a full backup?

Re: Target a bottleneck?

Post by jadams159 »

I am using parallel processing.

Re: Target a bottleneck?

Post by tsightler »

So how many tasks do you have running against the target? The default is 4, but you can certainly increase it. I was confused by your statement that you can start more backup jobs to increase performance since, with parallel processing enabled, I would expect those jobs to simply queue up behind the currently active one. Perhaps you can provide a little more detail. What performance are you seeing? What about the bottleneck stats?

Re: Target a bottleneck?

Post by jadams159 »

Job 1:
VM A
vmdk0:0 - 80GB
vmdk0:1 - 900GB
vmdk0:2 - 500GB

Proxy 1 picked up 0:0 and Proxy 2 picked up 0:1; when Proxy 1 finished 0:0, it picked up 0:2.
0:0 - 52MB/s
0:1 - 54MB/s
0:2 - 46MB/s

While Job 1 is running I can start Job 2, and the server and proxy will pick up VMDKs from Job 2 and start processing them with little impact on the original job. The Data Domain ingest numbers (CPU, disk, and network) almost double but are still well below the maximum; I can copy a 100GB file to it with no negative effect on backup performance, so it doesn't appear that the DD2500 is the bottleneck.

An example of the load from a job reads like this: Source:39% > Proxy:74% > Network:30% > Target:80%

No matter what I throw at the Data Domain, it just soaks it up, and I would think that if it were under too much load, the backup job's throughput would drop.
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Target a bottleneck?

Post by dellock6 »

Two quick checks:
- What kind of backup are you running, forward or reversed?
- How did you connect the Data Domain: CIFS, or NFS mounted to a Veeam repository?

I'm asking because those are the two most common situations where a DD (or any dedup appliance, to be honest) can have issues.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Target a bottleneck?

Post by Gostev »

jadams159 wrote:My jobs report that the data domain is the bottleneck, however I can push more and more data to the data domain by starting additional backup jobs and by copying data to it via CIFS and it doesn't skip a beat.
I believe this is a well-known Data Domain design peculiarity: it takes multiple write streams to saturate a Data Domain.
tsightler wrote:With V7 and parallel processing a single job should be sending multiple streams.
Multiple read streams yes, but single write stream still.
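The read/write asymmetry described here can be sketched as a fan-in pattern (a minimal Python illustration, not Veeam internals; the disk names and block counts are made up): one reader per disk runs in parallel, but a single writer drains the shared queue, so the repository sees only one write stream per job.

```python
import queue
import threading

def reader(disk, out_q, blocks=3):
    # Stand-in for a per-VMDK read stream; each disk is read in parallel.
    for i in range(blocks):
        out_q.put((disk, i))
    out_q.put((disk, None))  # sentinel: this reader is finished

def run_job(disks):
    # Multiple read streams feed one queue, but a single writer drains
    # it, so the target sees only one write stream for the whole job.
    q = queue.Queue()
    readers = [threading.Thread(target=reader, args=(d, q)) for d in disks]
    for t in readers:
        t.start()
    written, finished = [], 0
    while finished < len(disks):
        disk, block = q.get()
        if block is None:
            finished += 1
        else:
            written.append((disk, block))
    for t in readers:
        t.join()
    return written

written = run_job(["vmdk0:0", "vmdk0:1", "vmdk0:2"])
print(len(written))  # 9 blocks, all committed by the single writer
```

This is why starting a second job helps on a dedup appliance: each job adds one more independent write stream to the target.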

Re: Target a bottleneck?

Post by tsightler »

So do your jobs only have a single VM each? If you put the two VMs into one job, it should automatically use the extra proxy resources. There are some limitations with hotadd and parallel processing that restrict the ability to process multiple VMDKs from the same VM on the same proxy (other modes do not have this limitation). This is why you are seeing the behavior described above, since you only have two proxies and one VM in the job. If the job had more than one VM, it would start the backup of the first two disks from the first VM and begin processing the disks from the next VM concurrently.

The bottleneck statistics are simply a measure of the amount of time Veeam spent waiting on any particular portion of the chain under the current load. Sometimes these numbers aren't "linear" compared to the disk load. Veeam is simply asking Windows to write data to the file, so this measure includes all of the CIFS overhead and other delays. For example, you're currently running at 50MB/s and showing 80%, but it's possible that you might be able to run at 100MB/s and only show 90% wait at the target.

I would suggest we focus on increasing the parallelism. I'm also somewhat surprised at the proxy usage being so high; I would not expect this at the relatively low throughput you are running. How many vCPUs do you have on your proxies? Also, just to be 100% sure, this is a full/incremental backup, not a reverse incremental, correct?
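For intuition, the bottleneck percentages can be modeled as per-stage busy time divided by total elapsed time (a rough sketch with illustrative numbers mirroring the stats quoted earlier; this is not Veeam's actual formula):

```python
def stage_load(busy_seconds, elapsed_seconds):
    # Share of the job's elapsed time a stage spent busy; the stage
    # with the highest share is reported as the "bottleneck".
    return round(100 * busy_seconds / elapsed_seconds)

elapsed = 100.0  # illustrative elapsed time, not a real measurement
loads = {
    "Source": stage_load(39, elapsed),
    "Proxy": stage_load(74, elapsed),
    "Network": stage_load(30, elapsed),
    "Target": stage_load(80, elapsed),
}
bottleneck = max(loads, key=loads.get)
print(f"{bottleneck}: {loads[bottleneck]}%")  # Target: 80%
```

Note the shares need not sum to 100%, and the highest share only marks where the pipeline waited most; it does not mean the target device is saturated.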

Re: Target a bottleneck?

Post by tsightler »

Gostev wrote:Multiple read streams yes, but single write stream still.
Indeed, but I'm surprised to see a single write stream be so limited given his hardware. There can certainly come a point where multiple write streams will help as well.

Re: Target a bottleneck?

Post by jadams159 »

I'm using forward incrementals, but I'm running these tests by initiating active fulls.

I'm using a CIFS share from the Data Domain.

Job 1 has that one large VM. Job 2 has multiple VMs with smaller single disks. While Job 1 is being processed by the Veeam server and Proxy 1, the Veeam server and Proxy 1 will each pick up a VMDK from a VM in the second job.

The Veeam server has 4 vCPUs; the proxy has 2 vCPUs.

Re: Target a bottleneck?

Post by tsightler »

Right, so I'm suggesting putting all VMs into a single job to see what happens. It may provide very similar performance to running both jobs at the same time, or it may not. Gostev's statement above is correct, though: dedupe appliances are normally designed for best performance with many concurrent streams, so it's not completely surprising that running multiple concurrent jobs pushes the performance higher.

Re: Target a bottleneck?

Post by jadams159 »

The backup is still running, but it appears that I was getting more throughput running multiple jobs concurrently than running many VMs in one job (same VMs). I don't understand why the additional resources that are freed up when other VMs complete aren't allocated to the one VM that is still being processed.

Ultimately, I'm just trying to shorten my backup window.

This is frustrating because I used to have one Veeam server backing up my entire infrastructure to local storage using reverse incrementals. I was running backups every night and completing well before the start of the next day. The only problem was that I couldn't keep as many restore points as I wanted because I was using so much storage.

I moved to the Data Domain and started using forward incrementals. Now I have all the storage in the world, but my active fulls are taking WAY too long, and I haven't even introduced 50% of my infrastructure to the new scheme.
chrisdearden
Veteran
Posts: 1531
Liked: 226 times
Joined: Jul 21, 2010 9:47 am
Full Name: Chris Dearden
Contact:

Re: Target a bottleneck?

Post by chrisdearden »

Are your proxy servers using hotadd? There is a restriction in parallel processing that a hotadd proxy can only process one disk from a given VM at a time (it's an API thing :) ) - proxies running in network mode or Direct SAN can process multiple VMDKs from a given VM in parallel.

You could add an additional proxy, which would help with the large job, or try network mode.
Do you have any limits set on the repository as far as the number of tasks?
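The hotadd restriction just described can be sketched as a toy scheduler (illustrative Python, not Veeam's actual logic; the proxy names and task limits are made up): at most one disk per VM may be active on any one hotadd proxy, which is why a two-proxy setup starts only two of a single VM's three disks at once.

```python
def schedule_hotadd(pending, proxies):
    """pending: list of (vm, disk) tuples waiting to run.
    proxies: dict of proxy name -> max concurrent tasks.
    A hotadd proxy can process at most one disk per VM at a time,
    so two disks of the same VM need two different proxies."""
    mounted = {p: set() for p in proxies}  # VMs currently hot-added per proxy
    tasks = {p: 0 for p in proxies}
    plan = []
    for vm, disk in pending:
        for p in proxies:
            if tasks[p] < proxies[p] and vm not in mounted[p]:
                mounted[p].add(vm)
                tasks[p] += 1
                plan.append((p, vm, disk))
                break  # disk assigned; unassignable disks must wait
    return plan

# Two proxies, one VM with three disks: only two disks can start,
# matching the behavior reported earlier in the thread.
plan = schedule_hotadd(
    [("VM-A", "0:0"), ("VM-A", "0:1"), ("VM-A", "0:2")],
    {"proxy1": 4, "proxy2": 4},
)
print(plan)  # [('proxy1', 'VM-A', '0:0'), ('proxy2', 'VM-A', '0:1')]
```

Adding a third proxy (or switching to network/Direct SAN mode, which lacks this per-VM limit) would let the third disk run concurrently.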

Re: Target a bottleneck?

Post by jadams159 »

We are using hotadd (but this was determined by Veeam, as the transport mode is set to "Automatic").
Max Concurrent Tasks is set to 4.

Any suggestions on what I should try next? Changing to network mode? Or perhaps raising the max ingestion rate to something closer to what my 10Gb connection can handle?

Re: Target a bottleneck?

Post by tsightler »

jadams159 wrote:The backup is still running, but it appears that I was getting more throughput running multiple jobs concurrently, compared to running many VMs in one job. (Same VMs) I don't understand why the additional resources that are freed up when other VMs complete aren't allocated to the one VM that is still being processed.
As I stated above, with hotadd only a single VMDK can be processed per proxy due to a limitation of the VDDK, so it's only going to process two VMDKs from the same VM with your current setup. Is your entire network 10Gb (including the ESXi management interfaces)?

Re: Target a bottleneck?

Post by jadams159 »

Every host's physical NICs report an actual speed of 20000Mb full duplex.

I can't find a place to confirm the actual speed of the virtual adapter for the management IP.