Comprehensive data protection for all workloads
kjstech
Expert
Posts: 160
Liked: 16 times
Joined: Jan 17, 2014 4:12 pm
Full Name: Keith S
Contact:

v9 Busy: 99% source > Proxy 3% > Network 0% Target 0%

Post by kjstech »

Hello,

Just put in a new Exagrid backup system. It has a 10 GbE interface into a storage network that our Veeam v9 server can also access, as does another server to which we also pushed out the Veeam proxy service.

The initial backup did take quite some time, but the forward-forever incrementals all complete in under 3.5 hours. Still, I'm thinking we could do better than the processing rate, which averages 35-40 MB/s. Because the source is 97-99% busy in our three jobs, I thought deploying an additional backup proxy could help. However, when I look at our job logs it seems the default VMware Backup Proxy for disk [hotadd] is used. Shouldn't the system be able to utilize the other proxy simultaneously? If a job has 20 VMs in it and there are two proxies, couldn't the first two VMs in that job be processed at the same time?

All settings are set to Exagrid best practices.

Not having any trouble with the incremental backup times, just looking to see that we have the best performance possible.
chrisdearden
Veteran
Posts: 1531
Liked: 226 times
Joined: Jul 21, 2010 9:47 am
Full Name: Chris Dearden
Contact:

Re: v9 Busy: 99% source > Proxy 3% > Network 0% Target 0%

Post by chrisdearden »

The proxy reads from your source storage, not the target. Where is said random server?
dellock6
VeeaMVP
Posts: 6165
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: v9 Busy: 99% source > Proxy 3% > Network 0% Target 0%

Post by dellock6 »

If the bottleneck is 99% source, adding new proxies will not help at all. On the contrary, if they start processing even more VMs, chances are the production storage will be even more impacted and slowed down.
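For readers new to the stats line in the thread title: Veeam's bottleneck indicator simply reports which pipeline stage (source, proxy, network, target) was busy for the largest share of the job's run time. A minimal Python sketch of that reading (illustrative only, not Veeam code; the percentages are taken from this thread's title):

```python
# Stage-busy percentages from the job stats in this thread's title.
stages = {"Source": 99, "Proxy": 3, "Network": 0, "Target": 0}

# The reported bottleneck is simply the busiest stage.
bottleneck = max(stages, key=stages.get)
print(f"Bottleneck: {bottleneck} ({stages[bottleneck]}%)")
```

Only when "Proxy" tops this list would adding proxies be expected to raise throughput; here the source storage is the constraint.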
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
kjstech
Expert
Posts: 160
Liked: 16 times
Joined: Jan 17, 2014 4:12 pm
Full Name: Keith S
Contact:

Re: v9 Busy: 99% source > Proxy 3% > Network 0% Target 0%

Post by kjstech »

6 ESXi hosts, a proxy on host 4, and Veeam itself on host 5. Each host has a 10 Gb, 9000 MTU link to a switch where an EMC VNX5200 NFS array sits.
alanbolte
Veteran
Posts: 635
Liked: 174 times
Joined: Jun 18, 2012 8:58 pm
Full Name: Alan Bolte
Contact:

Re: v9 Busy: 99% source > Proxy 3% > Network 0% Target 0%

Post by alanbolte »

kjstech wrote:However when I look at our job logs it seems the default VMware Backup Proxy for disk [hotadd] is used. Shouldn't the system be able to utilize the other proxy simultaneously? If a job has 20 VM's in it and there's two proxies, couldn't the first two VM's in that job be processed at the same exact time?
Double-check that this new proxy meets the requirements for hotadd. Also, try temporarily forcing the job to use that proxy to see if it works at all, and if so, what transport mode it uses.
kjstech
Expert
Posts: 160
Liked: 16 times
Joined: Jan 17, 2014 4:12 pm
Full Name: Keith S
Contact:

Re: v9 Busy: 99% source > Proxy 3% > Network 0% Target 0%

Post by kjstech »

So for load balancing I have 3 NFS datastores on 3 subnets with 3 VMkernel adapters. This was set up thanks to Chris Wahl's extensive testing on optimizing traffic over NFS: ESXi 5.0 does not use multiple connections for a single NFS datastore (or something to that effect), so I have 3 file systems, 3 Veeam jobs, and 3 Exagrid repositories. The first job kicks off and uses the local Veeam backup server in hotadd. An hour later the next backup job kicks off, and it sometimes uses a mixture of the proxy in hotadd and the backup server itself in hotadd. Four hours later the last file system backup begins, and by that time the first two are done, so this last one only uses Veeam's built-in proxy in hotadd.

I should add that the Exagrid repos are all set to allow 1 concurrent task at a time (which is why there are three repos, one for each job / file system). Maybe this is why? It's per their best practices document; I suspect it has to do with how the Veeam accelerated data mover hands off the data and the appliance runs its dedupe process on the backend.
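If the repository task limit really is the cap, the arithmetic is simple; a hedged back-of-the-envelope sketch in Python (assumes the setup described above: one job per repository, which may not match every configuration):

```python
# Assumed setup from the post: 3 Exagrid repositories, each limited to
# 1 concurrent task, with each job writing to exactly one repository.
repos = 3
tasks_per_repo = 1

# Each job's parallelism is capped by its repository's task limit, so
# no matter how many proxies exist, each job processes 1 VM at a time.
max_parallel_per_job = tasks_per_repo
max_parallel_total = repos * tasks_per_repo
print(max_parallel_per_job, max_parallel_total)
```

That would explain why a second proxy sees little use: within any one job there is never more than one task to hand out.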

I'm not complaining, though, because all backups finish within 4-5 hours or so. It was just the very first initial seed, when we installed the appliance, that took about 36 hours.

The 10 GbE interface peaks at around 600 Mbps on the graphs. Previously we backed up to an old Dell 2900 running FreeNAS on a 1 GbE interface that peaked around 120 Mbps, so we've at least seen improvement there.
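To compare those interface graphs with the job's processing rate in the same units, divide by 8 to convert megabits/s to megabytes/s; a quick sketch:

```python
def mbps_to_mb_per_s(mbps: float) -> float:
    """Convert network megabits/s to megabytes/s."""
    return mbps / 8.0

new_peak = mbps_to_mb_per_s(600)  # 10 GbE graph peak
old_peak = mbps_to_mb_per_s(120)  # old 1 GbE graph peak
print(new_peak, old_peak, new_peak / old_peak)  # 75.0 15.0 5.0
```

So the 600 Mbps peak is roughly 75 MB/s, about 5x the old box, and comfortably above the 35-40 MB/s average processing rate, consistent with the network not being the bottleneck.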

Iometer on a VM shows about 1,800 IOPS (256 KB blocks) to an NFS-backed VMDK on the VNX5200, a huge improvement over the old EMC NX4 we used a few years ago.
dustinn3
Influencer
Posts: 22
Liked: 6 times
Joined: Oct 14, 2013 1:53 pm
Full Name: Dustin Newby
Contact:

Re: v9 Busy: 99% source > Proxy 3% > Network 0% Target 0%

Post by dustinn3 »

I'm seeing similar results from my VNX5300 to a DD2500 using 10 GbE across the board, over iSCSI: 43 MB/s, Busy: Source 98% > Proxy 0% > Target 0%.
