Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Hello everybody,

I have installed a new Veeam Backup & Replication server using v9.0.0.902 on a physical HP ProLiant DL380 G6 server with 2 CPUs (each having 8 cores with HT), 72GB of RAM and a very fast MSA with 4.1TB of disk space directly attached. The server is running Windows Server 2012 R2. Our 2-node ESXi cluster consists of HP ProLiant DL380 G7 servers with 2 CPUs (each having 12 cores with HT) and 144GB of RAM. Both ESXi hosts are attached to a NetApp FAS2040 (with 2 additional shelves) via FC. I have deployed 2 VMware Backup Proxies (one per ESXi host), each with 4 vCPUs and 4GB of RAM, also running Windows Server 2012 R2.

The VBR 9.x server is attached at 4GBit/s (4x 1GBit/s NICs in a Cisco EtherChannel), and both ESXi hosts have a vSwitch (for the VM network) with a 4GBit/s trunk attached to the same switch as the VBR server, so the theoretical NIC throughput for each VMware Backup Proxy could reach 1GBit/s (deduplication not considered), right?
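Just as a back-of-the-envelope check, here is a tiny Python sketch of that expectation (the ~10% protocol overhead is my own assumption, not a measured value):

Code:

# Rough estimate of what one 1GBit/s proxy uplink should sustain,
# compared to the ~30MB/s actually observed.
link_gbit_per_s = 1.0        # one EtherChannel member per proxy
protocol_overhead = 0.10     # assumed TCP/IP + framing overhead

expected_mb_per_s = link_gbit_per_s * 1000 / 8 * (1 - protocol_overhead)
observed_mb_per_s = 30.0

print(f"Expected per proxy: ~{expected_mb_per_s:.0f} MB/s")
print(f"Observed:            {observed_mb_per_s:.0f} MB/s "
      f"({observed_mb_per_s / expected_mb_per_s:.0%} of expectation)")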

A disk backup job is configured to use those proxies in HOTADD transport mode, and during backup the statistics window reports HOTADD as the transport mode, but the actual transfer speed is very low (only 30MB/s).

Benchmarking disk performance in the backup proxy gives me nearly 280MB/s read performance; the backup repository on the physical VBR server gives me nearly 2700Mb/s write performance.

The bottleneck is reported as Source at 99%.

What might cause the bottleneck in this environment? Does anyone have any clues?

Regards,
Didi7
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Furthermore, I would like to mention that I was forced to use Backup Exec 15 V-Ray before, which made it impossible to use HOTADD, as the Backup Exec 15 V-Ray server was physical. Of course, NBD transport mode was used, which gave me around 30 to 40MB/s at most when backing up VMs with VADP. When I used the Backup Exec Remote Agent, I got around 100MB/s or 6000MB/min transfer speed.

Something is really fishy here atm.

Disk benchmarks in the VMware Backup Proxies give me nearly 280MB/s, but transfer speed is limited to 50MB/s (for the last VM I backed up, whose VMDKs reside on a NetApp RAID-DP SAS volume).

Are there any registry keys that can be used to boost performance when using Virtual Appliance mode?
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Btw, is it true that VBR 9.x automatically disables CBT on VMs that serve as VMware Backup Proxies for the VBR server when they are part of a VBR disk backup job, or do I have to do that myself?
Using the most recent Veeam B&R in many different environments now and counting!
DaveWatkins
Veteran
Posts: 370
Liked: 97 times
Joined: Dec 13, 2015 11:33 pm
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by DaveWatkins »

You shouldn't need 2 proxies; since each host can see all the LUNs, a single proxy should work fine.

VBR doesn't disable CBT either; it will use it for all jobs after the first one. If you can connect your backup server to the FC, you could use Direct SAN mode, but I doubt that's going to resolve the problem. Something is definitely not right there if your numbers are correct.
alanbolte
Veteran
Posts: 635
Liked: 174 times
Joined: Jun 18, 2012 8:58 pm
Full Name: Alan Bolte
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by alanbolte »

Didi7 is correct that CBT is automatically disabled on VMs used as backup proxies.

Is there a difference in performance between your incremental backups and active full backups? You can check your session history rather than run a new full for comparison.
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

alanbolte wrote:Didi7 is correct that CBT is automatically disabled on VMs used as backup proxies.
So, to make it absolutely clear: if a VMware Backup Proxy (with Veeam transport agents installed) is part of a VBR disk backup job, VBR automatically disables CBT for that particular VM, even though CBT is enabled globally in the job, right?
DaveWatkins wrote:You shouldn't need 2 proxies since each HOST can see all the LUN's a single proxy should work fine.
I know, but it makes sense for redundancy and performance reasons, because the 2nd VMware Backup Proxy can use a different 1GBit/s vNIC than the first one, don't you agree?
DaveWatkins wrote:If you can connect your Backup server to the FC you could use direct SAN mode
Unfortunately that's not possible, as the NetApp FAS2040 is directly connected to the ESXi FC HBAs. Maybe later, when a third ESXi host joins the cluster.
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Anyway, it would be good to know where you see these speeds (incrementals or fulls), as incrementals skip a lot of data and thus pull down your transfer rate (we do fulls at around 1GB/s, while incrementals rarely go over 200-400MB/s).
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Delo123 wrote:Anyway it would be good to know where you see these speeds (incrementals or fulls)
I created a completely new disk backup job and right-clicked 'Start', so I am talking about full backups.
Using the most recent Veeam B&R in many different environments now and counting!
foggy
Veeam Software
Posts: 21070
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by foggy »

Source reported as the bottleneck means that source data retrieval speed is limiting the job. You should pay attention to the source storage: perhaps try updating the firmware and make sure you're using the latest storage drivers.
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Yes, I have read that many times now while searching the Internet for a solution to the poor performance I'm getting.

Do you have an explanation why an FTP transfer from the virtual VMware Backup Proxy to the physical VBR server runs at 100MB/s, or why a backup with a Veritas Backup Exec Remote Agent gives me 100MB/s (6000MB/min) transfer speed, while VBR HOTADD is so slow?

I did find the info about storage connected via iSCSI and how you can tune performance with some parameters, but this storage is directly connected to both ESXi hosts with 4GBit/s FC, so that VMware KB is of no use in this scenario.
Using the most recent Veeam B&R in many different environments now and counting!
foggy
Veeam Software
Posts: 21070
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by foggy »

Didi7 wrote:Do you have an explanation why an FTP transfer from the virtual VMware Backup Proxy to the physical VBR server runs at 100MB/s
This is actually the next hop in the data processing chain, which is referred to as Network in the bottleneck statistics. Your bottleneck is at the stage of retrieving the data from the source storage and transferring it to the proxy.
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

The data that is retrieved through the FTP server is located on the NetApp storage, so IMHO this is the Source and not the Network.
Using the most recent Veeam B&R in many different environments now and counting!
foggy
Veeam Software
Posts: 21070
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by foggy »

I'd try to switch your jobs to NBD mode and see what performance you get.

Btw, when you say 30MB/s, do you mean the entire job processing rate or the hard disk read speed?
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

I got around 50MB/s on a VM whose VMDKs reside on NetApp SAS disks. With NBD transport mode, I get around 30MB/s. Still, I don't understand why retrieving data through the FTP server gives me nearly 100MB/s (native speed, without deduplication of course) and why a Veritas Backup Exec Remote Agent also gives me nearly 6000MB/min on the same VM.
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Did you try enabling parallel processing and backing up multiple VMs/disks at once? 30MB/s sounds like a single VM/disk...
jveerd1
Service Provider
Posts: 52
Liked: 10 times
Joined: Mar 12, 2013 9:12 am
Full Name: Joeri van Eerd
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by jveerd1 » 1 person likes this post

Do not get confused by the way Backup Exec measures the processing rate. A fair comparison would be to run an active full backup and measure the time it takes for the backup to complete.

That said, monitor your NetApp while running your backup to determine the bottleneck. A simple sysstat will probably reveal possible bottlenecks pretty quickly.
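For example, here is a quick Python sketch of that comparison (the size and duration below are placeholders, not figures from your environment):

Code:

# Effective throughput of an active full, independent of how each product
# reports its processing rate: data read divided by wall-clock duration.
vm_used_gb = 100.0      # placeholder: used data backed up from the VM
duration_min = 28.0     # placeholder: measured active full duration

effective_mb_per_s = vm_used_gb * 1024 / (duration_min * 60)
print(f"Effective rate: {effective_mb_per_s:.0f} MB/s")  # ~61 MB/s for these numbers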
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Delo123 wrote:Did you try enabling parallel processing and backup multiple vm's / disks at once? 30MB/s sound like a single vm/disk...
Parallel processing in the global options tab 'I/O Control' is enabled by default here, though I cannot see parallel processing actually working, because the job statistics report sequential processing. The virtual VMware Backup Proxies allow 4 concurrent tasks, but the backup repository allows only 1 task atm, which is why processing is sequential.

I have another VMware cluster environment with a similar configuration (10GBit/s iSCSI to the storage instead of 4GBit/s FC), with HP ProLiant DL380p G8 ESXi hosts connected to an MSA2040 and VBR 8.x, in production since the end of 2014. It is much faster (partly over 200MB/s) using HOTADD transport mode, and HDD processing is sequential there as well.
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Maybe a stupid question but why do you only allow 1 concurrent task?
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

jveerd1 wrote:Do not get confused by the way Backup Exec measures the processing rate. A fair comparison would be to run an active full backup and measure the time it takes for the backup to complete.

That said, monitor your NetApp while running your backup to determine the bottleneck. A simple sysstat will probably reveal possible bottlenecks pretty quickly.
Do you think the Backup Exec transfer rates (with a traditional Remote Agent) in the job result statistics are not accurate?

Monitoring the NetApp FAS2040 with 'sysstat' is good advice. Do you have experience with VBR backups on a NetApp FAS2040?
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Delo123 wrote:Maybe a stupid question but why do you only allow 1 concurrent task?
Well, the new VMware cluster environment has a similar hardware configuration to the first cluster environment from the end of 2014, except that the first one uses HP DL380p G8 servers instead of HP DL380 G7, and an HP MSA2040 storage directly connected via 10GBit/s iSCSI instead of a NetApp FAS2040 directly connected via 4GBit/s FC.

The new physical VBR backup server now runs VBR 9.x instead of 8.x, and I tried to use the same VBR configuration I use in the production environment from the end of 2014. In that production environment, 2 disk backup jobs run at the same time, writing data to different backup repositories, but only one task is allowed per backup repository atm, as physically they sit on the same RAID-5 disk set. In that production environment I get transfer speeds from more than 100MB/s up to over 200MB/s.

Of course, for a test I can configure the backup repository to allow 2 or more concurrent tasks.

Since the new VMware cluster environment with the physical VBR 9.x server is still not in production, I delete the backups before I restart the jobs, so the disk backup jobs are always full backups.
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

foggy wrote:Well, in that case I'd try to switch your jobs to NBD mode and see what performance you get.

Btw, when you say 30MB/s, do you mean the entire job processing rate or the hard disk read speed?
With NBD transport mode, processing is limited to around 30MB/s, if I recall correctly.

For the last disk backup job (a VM located on a datastore backed by a NetApp RAID-DP aggregate of 15K RPM SAS HDDs), I got the following statistics ...

Processing rate 59MB/s
Load: Source 96% > Proxy 28% > Network 4% > Target 2%
Hard disk 1 (50.0 GB) 14.6 GB read at 47 MB/s [CBT]
Hard disk 2 (50.0 GB) 45.3 GB read at 64 MB/s [CBT]

using HOTADD transport mode.
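As a sanity check, those figures are at least internally consistent (a rough Python sketch, assuming the processing rate is simply total data read divided by total read time when disks are processed sequentially):

Code:

# Per-disk figures from the job statistics above (sequential processing).
disks = [(14.6, 47), (45.3, 64)]   # (GB read, MB/s)

read_seconds = sum(gb * 1024 / rate for gb, rate in disks)
total_mb = sum(gb for gb, _ in disks) * 1024
print(f"Implied overall rate: {total_mb / read_seconds:.0f} MB/s")  # ~59 MB/s, as reported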

Now I am running the same job with parallel processing allowed in the backup repository, and I will post the results here again.
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Ok, so in this case hard disks 1 and 2 actually ran sequentially?
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Delo123 wrote:Ok, so in this case hard disks 1 and 2 actually ran sequentially?
Yes, just to have the same configuration as the one in my production environment, where I get more than 4 times the transfer speed while processing HDDs sequentially.
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Ok, because I am not sure whether the processing rate is a reliable counter for measuring "absolute" speed...

For instance, I have a job backing up a single VM with 5 disks in parallel:

Processing rate 504MB/s
Hard disk 5 (1000,0 GB) 183,0 MB read at 106 MB/s [CBT]
Hard disk 3 (1,3 TB) 56,2 GB read at 246 MB/s [CBT]
Hard disk 2 (1,3 TB) 56,1 GB read at 246 MB/s [CBT]
Hard disk 4 (1000,0 GB) 187,0 MB read at 101 MB/s [CBT]
Hard disk 1 (40,0 GB) 2,3 GB read at 127 MB/s [CBT]
Busy: Source 69% > Proxy 56% > Network 10% > Target 3%

The processing rate is 504MB/s, but the combined rate of all disks is around 825MB/s, which is obviously higher...
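A rough Python sketch of what I mean (my guess is that the processing rate is roughly total data read divided by the wall-clock time of the longest-running disk; that is an assumption, not an official formula):

Code:

# Per-disk figures from the job above (disks processed in parallel).
disks = [(0.183, 106), (56.2, 246), (56.1, 246), (0.187, 101), (2.3, 127)]  # (GB read, MB/s)

naive_sum = sum(rate for _, rate in disks)                    # ~826 MB/s
wall_clock_s = max(gb * 1024 / rate for gb, rate in disks)    # longest disk dominates
total_mb = sum(gb for gb, _ in disks) * 1024
print(f"Sum of per-disk rates: {naive_sum:.0f} MB/s")
print(f"Implied job rate:      {total_mb / wall_clock_s:.0f} MB/s")  # close to the reported 504 MB/s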
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Ok, here are the results of the same disk backup job (older backups were deleted, so this is a full backup again) with 5 concurrent tasks allowed in the backup repository ...

Processing rate: 75MB/s
Load: Source 96% > Proxy 24% > Network 5% > Target 4%
Hard disk 1 (50.0 GB) 14.5 GB read at 41 MB/s [CBT]
Hard disk 2 (50.0 GB) 45.3 GB read at 57 MB/s [CBT]

The hard disks were processed in parallel using 2 different VMware Backup Proxies.

I must admit this is an improvement, but it is still far below the FTP transfer speed of 100MB/s and the Backup Exec Remote Agent speed of 6000MB/min, and only because parallel processing was allowed.

Both hard disks are located on different datastores and LUNs, but physically they are in the same NetApp RAID-DP SAS disk aggregate. Obviously the NetApp FAS2040 can provide more speed, but the VMware Backup Proxy cannot deliver it.

The NetApp FAS2040 sysstat output during the VBR backup is as follows ...

Code:

fas2040ctrl2> sysstat
 CPU     NFS    CIFS    HTTP     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache
                                             in     out     read  write    read  write    age
 23%       0       0       0       1      2   94727    494       0      0     1
 20%       0       0       0       0      0   83244    341       0      0     1
 20%       0       0       0       0      0   79329    555       0      0     1
 19%       0       0       0       1      3   78977    198       0      0     1
 19%       0       0       0       0      0   75211    582       0      0     1
 23%       0       0       0       1      1   94549    450       0      0     1
 21%       0       0       0       0      0   87186    797       0      0     1
  7%       0       0       0       1      3   28894    265       0      0     1
 15%       0       0       0       0      0   51781   2066       0      0     0s
 19%       0       0       0       0      0   75069   2219       0      0     0s
 19%       0       0       0       0      1   75009    231       0      0     2
 18%       0       0       0       1      3   69082   3693       0      0     0s
 18%       0       0       0       0      0   76139    351       0      0     0s
 17%       0       0       0       0      0   74701    323       0      0     0s
 17%       0       0       0       0      0   72713    718       0      0     3
 16%       0       0       0       1      3   70889    239       0      0     0s
 11%       0       0       0       0      0   45850    568       0      0     0s
  8%       0       0       0       0      0   35512    312       0      0     4
 18%       0       0       0       1      2   80044    685       0      0     4
I would say that's also far from being overloaded.
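For what it's worth, averaging the 'Disk kB/s read' column of that sample (a quick Python sketch, assuming the console output above is first saved to a file called sysstat.txt):

Code:

# Average disk read throughput from the sysstat sample above.
readings = []
with open("sysstat.txt") as f:
    for line in f:
        fields = line.split()
        if fields and fields[0].endswith("%"):   # data rows start with the CPU percentage
            readings.append(int(fields[6]))      # 7th field = Disk kB/s read

avg_mb_per_s = sum(readings) / len(readings) / 1024
print(f"Average disk read: {avg_mb_per_s:.0f} MB/s over {len(readings)} samples")  # roughly 69 MB/s here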

I have no further clue atm.
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Did you try NBD but using your Veeam server as the proxy instead of your virtual proxy? (And make sure your proxy can also handle multiple tasks.)
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Delo123 wrote:Did you try NBD but using your Veeam server as the proxy instead of your virtual proxy? (And make sure your proxy can also handle multiple tasks.)
Will do that right now...
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

:) Another thing that came to mind: which NICs are you using in your VMs? VMXNET3 gives the best performance....
Didi7
Veteran
Posts: 491
Liked: 61 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Delo123 wrote::) Another thing that came to mind: which NICs are you using in your VMs? VMXNET3 gives the best performance....
I am using VMXNET3 in the virtual VMware Backup Proxies, and the OS is Windows Server 2012 R2 with 4 vCPUs and 4GB of RAM.
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Hmm sounds perfect :) Let's get this working and grab a weekend beer :)