Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Delo123 wrote:Hmm sounds perfect :) Let's get this working and grab a weekend beer :)
Sounds like a good idea!

The disk backup job results with NBD will follow soon, and then I will call it a day as well ;)
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Here are the results of the same full disk backup job using NBD transport mode through the VBR backup server itself, after deactivating the virtual VMware Backup Proxies ...

Duration: 0:23:31
Processing rate: 48 MB/s
Load: Source 92% > Proxy 6% > Network 9% > Target 9%
Hard disk 1 (50.0 GB) 14.5 GB read at 29 MB/s [CBT]
Hard disk 2 (50.0 GB) 45.3 GB read at 37 MB/s [CBT]

Clearly NBD (1 Gbit/s) is the limiting factor here, even though the job reports Source as the bottleneck.
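
For what it's worth, a quick back-of-the-envelope check of what a 1 Gbit/s NBD link can deliver versus the per-disk rates reported above (the ~10% protocol overhead is an assumption, not a measured value):

Code: Select all

# Rough sanity check: ceiling of a 1 Gbit/s NBD (management) link versus the
# per-disk read rates from the job statistics above.
link_gbits = 1.0
raw_mb_s = link_gbits * 1000 / 8        # ~125 MB/s raw
usable_mb_s = raw_mb_s * 0.9            # assumed ~10% protocol overhead -> ~112 MB/s

observed = {"Hard disk 1": 29, "Hard disk 2": 37}    # MB/s from the job stats
combined = sum(observed.values())                    # 66 MB/s combined

print(f"NBD ceiling ~{usable_mb_s:.0f} MB/s, observed combined {combined} MB/s")
# Both disks share the same 1 Gbit/s management interface, so it is the
# combined rate, not the per-disk rate, that the link ultimately caps.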
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Hmm, OK... Let's think about it again and have a fresh start Monday!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

Maybe one more thing... Which storage optimization setting did you use in the backup job? LAN target, Local target, Local target (large blocks)? Maybe switch it to change the block size and do one last check with the proxies enabled again? We get better performance with 'Local target' even though we actually back up over the LAN.
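
In case it helps, the storage optimization choices map roughly to the following source block sizes (values from memory for VBR 9.x, so treat them as assumptions and check the documentation):

Code: Select all

# Approximate data block sizes behind Veeam's "storage optimization" setting
# (values as I recall them for VBR 9.x -- assumptions, verify in the docs).
BLOCK_SIZE_KB = {
    "WAN target": 256,
    "LAN target": 512,
    "Local target": 1024,
    "Local target (16 TB+ backup files)": 4096,
}

# Larger blocks mean fewer, bigger reads against the source storage, which can
# be friendlier to a small controller than many smaller 512 KB reads.
for name, kb in BLOCK_SIZE_KB.items():
    print(f"{name}: {kb} KB per source block")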
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Delo123 wrote:Hmm, OK... Let's think about it again and have a fresh start Monday!
One last thing that came to mind: our NetApp volumes use deduplication, and the deduplication rate is very high (60% savings on the volumes where the VMs' OS disks are located). Should that make a significant difference?

I will probably check whether I can move some VMs and create another NetApp volume on the SAS disk aggregate that doesn't use NetApp Storage Efficiency.

Thanks for your support, really appreciated.

Maybe someone with NetApp FAS2040 experience can add their two cents to this thread ;)

Enjoy your weekend.
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

I would bet a six-pack you will get better numbers with clean, non-deduplicated LUNs :) but I personally have never played with such a NetApp, so let's see once you have tested :) Have a nice weekend too!
kryptoem
Influencer
Posts: 11
Liked: 5 times
Joined: Jan 28, 2016 6:36 am
Full Name: Etienne Munnich
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by kryptoem »

I've seen similar performance issues with v9 - I've had to force the proxies to run Virtual Appliance mode with failover to network. The vSphere environment is 6 (latest build). I see less of a VSS stun, but backup performance appears to be much slower than before (v8).
jveerd1
Service Provider
Posts: 53
Liked: 10 times
Joined: Mar 12, 2013 9:12 am
Full Name: Joeri van Eerd
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by jveerd1 »

You should look at the Disk Utilization counters when running sysstat -x. If disk utilization is above 70% during the backup job, you have hit your bottleneck. Please contact NetApp support or your supplier in this case, because the implementation or sizing might not be correct. If disk utilization is below 50%, you can rest assured your NetApp is probably not the bottleneck, and you should investigate the FC connections to your ESXi servers. Veeam support might be able to help; otherwise contact VMware support.

Veeam bottleneck statistics are accurate; you can rely on them when troubleshooting.
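
If it helps, here is a quick-and-dirty sketch for pulling the disk utilization column out of a captured sysstat -x log and applying those thresholds (the column index is an assumption based on the usual sysstat -x layout, so verify it against the header line of your own output):

Code: Select all

# Pull the "Disk util" column from a captured "sysstat -x" run (e.g. the
# console output redirected to a text file) and apply the 50%/70% rules.
# Assumption: disk utilization is the 4th percentage field on each data line
# (after CPU, Cache hit and CP time) -- check this against your header line.

def disk_util_values(path):
    utils = []
    for line in open(path):
        pct = [f.rstrip("%") for f in line.split() if f.endswith("%")]
        if len(pct) >= 4:              # data lines carry four percentage fields
            utils.append(int(pct[3]))  # CPU, cache hit, CP time, disk util
    return utils

if __name__ == "__main__":
    utils = disk_util_values("sysstat.log")   # hypothetical capture file
    if not utils:
        raise SystemExit("no data lines found")
    peak = max(utils)
    if peak > 70:
        print(f"peak disk util {peak}% -> the disks are your bottleneck")
    elif peak < 50:
        print(f"peak disk util {peak}% -> the NetApp is probably not the bottleneck")
    else:
        print(f"peak disk util {peak}% -> borderline, keep investigating")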
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

kryptoem wrote:I've seen similar performance issues with v9 - I've had to force the proxies to run Virtual Appliance mode with failover to network. The vSphere environment is 6 (latest build). I see less of a VSS stun, but backup performance appears to be much slower than before (v8).
Hello kryptoem, what do you mean by 'with failover to network'? I am using vSphere 5.5 here.
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

jveerd1 wrote:You should look at the Disk Utilization counters when running sysstat -x. If disk utilization is above 70% during the backup job, you have hit your bottleneck. Please contact NetApp support or your supplier in this case, because the implementation or sizing might not be correct. If disk utilization is below 50%, you can rest assured your NetApp is probably not the bottleneck, and you should investigate the FC connections to your ESXi servers. Veeam support might be able to help; otherwise contact VMware support.

Veeam bottleneck statistics are accurate; you can rely on them when troubleshooting.
During the Veeam disk backup job with NBD transport mode ...

Code: Select all

fas2040ctrl2> sysstat -x
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 19%      0      0      0     422       1      3   70753    389       0      0     4     98%    2%  T    39%       1    421      0     211  67132       0      0
 19%      0      0      0      78       1      4   71616    401       0      0     1     97%    2%  T    37%       2     76      0     192  65377       0      0
 17%      0      0      0     155       1      2   57792    909       0      0     1     96%    7%  T    48%       1    154      0     654  59059       0      0
 18%      0      0      0      69       0      0   69846   1027       0      0     2     99%    6%  T    44%       1     68      0     237  64031       0      0
 19%      0      0      0     105       0      0   67187   2929       0      0     2     98%   14%  T    42%       1    104      0     453  64932       0      0
 17%      0      0      0     136       1      2   63350    861       0      0     1     98%    5%  T    42%       1    135      0     458  62221       0      0
 18%      0      0      0     243       0      0   66549    871       0      0     1     98%    5%  T    37%      28    215      0     902  64342       0      0
 19%      0      0      0     246       6     38   64064   1359       0      0     1     99%    8%  Tf   44%       2    244      0    1174  56442       0      0
Disk utilization never hits 50%! I am quite sure that the FC connection (point-to-point) is correctly configured, but I will check that again. First, though, I will test Virtual Appliance mode and a backup from a non-deduplicated NetApp volume just for comparison.
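
For what it is worth, converting a few of the 'Disk kB/s read' samples from the output above into MB/s shows the filer serving roughly 56-70 MB/s of reads during the NBD run, which matches the FCP out column (treating sysstat's kB as 1024 bytes is an assumption):

Code: Select all

# Convert the "Disk kB/s read" samples from the sysstat -x output above into
# MB/s to cross-check against the job's 48 MB/s processing rate.
disk_read_kb_s = [70753, 71616, 57792, 69846, 67187, 63350, 66549, 64064]

mb_s = [kb / 1024 for kb in disk_read_kb_s]   # assumption: kB = 1024 bytes
print(f"min {min(mb_s):.0f} MB/s, max {max(mb_s):.0f} MB/s, "
      f"avg {sum(mb_s) / len(mb_s):.0f} MB/s")   # roughly 56-70 MB/s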
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Same sysstat -x output during the disk backup job, this time using HOTADD transport mode ...

Code: Select all

fas2040ctrl2> sysstat -x
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 11%      0      0      0     981       1      3    4156  16097       0      0     4s    99%   70%  Ff   16%      22    959      0    8438   4047       0      0
  3%      0      0      0      35       0      0     505   4274       0      0    16s    97%   21%  T     4%       1     34      0     508    149       0      0
  2%      0      0      0     101       1      2     379    840       0      0    16s    99%    5%  T     2%       8     93      0     458    842       0      0
  2%      0      0      0      69       0      4     622   1257       0      0    16s   100%    7%  Tv    3%       1     68      0     351      4       0      0
  1%      0      0      0      69       0      2     401    546       0      0    16s    99%    4%  T     3%      22     47      0     170    789       0      0
  2%      0      0      0      43       0      0     375    564       0      0    16s    98%    3%  T     2%       1     42      0     231     24       0      0
  4%      0      0      0     199       0      1    1981   2245       0      0    16s    96%   12%  T     9%       8    191      0     845   2047       0      0
 25%      0      0      0     869       0      1   33883  19848       0      0     0s   100%   36%  3f   49%       7    862      0   20110  34134       0      0
 31%      0      0      0    1053       0      4   27222  33695       0      0     0s   100%   66%  3    55%       1   1052      0   28166  25566       0      0
 25%      0      0      0     838       1      3   24135  25648       0      0     0s   100%   59%  Hn   45%      22    816      0   17770  24590       0      0
 25%      0      0      0    1720       0      0   11289  28224       0      0     0s    98%   55%  H    37%       1   1719      0   21928  10135       0      0
 18%      0      0      0    1725       0      1    7116  17145       0      0    29     99%   65%  Ff   26%       7   1718      0   12279   6868       0      0
  8%      0      0      0    1033       0      4    4738   7715       0      0    11s    94%   33%  Tf   15%       1   1032      0    1530   4034       0      0
 18%      0      0      0    3391       0      3   13144   9031       0      0     0s   100%   37%  Ff   24%      22   3369      0   10540  13523       0      0
 16%      0      0      0    2449       0      0    9823  13711       0      0     1s   100%   63%  Ff   33%       1   2448      0    8816   9311       0      0
 16%      0      0      0    2408       1      2    9302  11281       0      0     1s   100%   44%  F    23%       7   2401      0   12410   9461       0      0
  8%      0      0      0     358       7     14    1386  10699       0      0    29     99%   44%  F    10%       1    357      0    3590    960       0      0
 20%      0      0      0    3835       0      2   15487   8302       0      0     0s   100%   32%  2f   27%      22   3813      0    9457  15569       0      0
 22%      0      0      0    3527       6     40   13893  18396       0      0     1s   100%   71%  F    30%      55   3472      0   12985  13301       0      0
 32%      0      0      0    1442       7     64    5443  38409       0      0     0s    99%   69%  F    28%       7   1435      0   29727   4522       0      0
 37%      0      0      0    3071       0      4   40014  31503       0      0     2s    99%   67%  F    39%       1   3070      0   23948  36909       0      0
  9%      0      0      0     519       0      3    6146   7478       0      0     1s    99%   35%  Tf   13%      22    497      0    6148   6536       0      0
 27%      0      0      0     938       0      1   35554  22679       0      0     0s   100%   43%  3f   47%       7    931      0   22330  35776       0      0
 20%      0      0      0     598       1      4   17691  19709       0      0     0s    99%   56%  Ff   39%       1    597      0   17154  17057       0      0
 28%      0      0      0    1657       1      3   17192  28666       0      0     0s    99%   64%  Hf   45%      23   1634      0   20644  16835       0      0
 19%      0      0      0    3158       0      0   12600  15931       0      0    32     99%   63%  H    27%       2   3155      0   12758  11766       0      0
 15%      0      0      0    1060       0      1    4489  16528       0      0     0s    98%   58%  Hf   20%       7   1053      0    8994   4377       0      0
 20%      0      0      0    3879       0     29   15451  10489       0      0     0s   100%   39%  Hf   23%      16   3863      0    9788  15016       0      0
 11%      0      0      0     864       0      2    3378  16032       0      0     3s   100%   69%  F    17%      22    842      0    8456   3576       0      0
Disk utilization only surpassed 50% once; most of the time it stayed below 50%.

Overall processing rate was 74 MB/s.
Parallel processing was enabled, with one proxy taking the system drive and the other proxy taking the data drive, but it is still too slow in my opinion.

Next I will try with a non-deduplicated NetApp volume (Storage Efficiency disabled).
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

OK, here are the results of the disk backup job using parallel processing and HOTADD transport mode on NetApp volumes with Storage Efficiency disabled (deduplication off) ...

Processing rate: 88 MB/s
Load: Source 88% > Proxy 36% > Network 13% > Target 11%
Hard disk 1 (50.0 GB) 14.5 GB read at 69 MB/s [CBT]
Hard disk 2 (50.0 GB) 45.3 GB read at 67 MB/s [CBT]

This is only slightly faster and does not really persuade me to stop using 'Storage Efficiency' on NetApp volumes. Obviously, there is still something else holding the transfer speed down. The NetApp's disk utilization stays below 50%, so nothing to worry about there. What's left? Maybe the FC connection, which I am sure is correct (will check that as well), or maybe the new VBR 9.x version. Maybe I will try VBR 8.x as well.
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

kryptoem wrote:I've seen similar performance issues with v9 - I've had to force the proxies to run Virtual Appliance mode with failover to network. The vSphere environment is 6 (latest build). I see less of a VSS stun, but backup performance appears to be much slower than before (v8).
First of all, we use vSphere 5.5, and the proxies selected the transport mode automatically with failover to network. I have now forced the proxies to use Virtual Appliance mode with failover to network and restarted a full backup job. Results will be posted here soon ...
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

Hello kryptoem, the processing rate didn't change at all. Same transfer speed as without binding the virtual proxies to Virtual Appliance mode.

Checking FC connection now.
Using the most recent Veeam B&R in many different environments now and counting!
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 »

OK, the FC connections are correctly configured. Bumping the proxies up to 8 vCPUs and 8 GB of RAM didn't change anything either.

Parallel Processing disabled.

Processing rate: 76 MB/s
Source 88% > Proxy 20% > Network 15% > Target 11%
Hard disk 1 (50.0 GB) 14.5 GB read at 86 MB/s [CBT]
Hard disk 2 (50.0 GB) 45.3 GB read at 73 MB/s [CBT]

The disk backup was done from a non-deduplicated NetApp volume. Whatever I do, I cannot reach the FTP transfer speed of 100 MB/s per hard disk.

sysstat -x reports 27% disk utilization at most during the VBR backup.

The last option I have is switching to VBR 8.x and seeing if that improves the transfer rates. Will do that tomorrow with a fresh blank hard disk.

Does anyone have any more ideas?
Using the most recent Veeam B&R in many different environments now and counting!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Delo123 »

A lot of data is skipped on Hard disk 1. Maybe you could try to fully fill a hard disk, or decrease its capacity to 15 GB for testing, since skipping data results in a "lower" reported throughput rate...
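
Just to illustrate that point with rough numbers (the 30 seconds of skip overhead below is purely an assumed figure; the 14.5 GB read is taken from the job statistics above):

Code: Select all

# Toy illustration of why skipped blocks depress the displayed per-disk rate:
# the rate divides only the data actually read by the elapsed time, while time
# is still spent identifying and skipping the empty part of the disk.
read_gb = 14.5                       # data actually read from Hard disk 1
true_speed_mb_s = 100                # assumed raw read speed
read_time_s = read_gb * 1024 / true_speed_mb_s   # ~148 s of real reading
skip_overhead_s = 30                 # assumed time spent walking skipped blocks

displayed = read_gb * 1024 / (read_time_s + skip_overhead_s)
print(f"true read speed {true_speed_mb_s} MB/s, displayed rate ~{displayed:.0f} MB/s")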
Didi7
Veteran
Posts: 511
Liked: 68 times
Joined: Oct 17, 2014 8:09 am
Location: Hypervisor
Contact:

Re: Slow performance using HOTADD mode in VBR 9.x ...

Post by Didi7 » 1 person likes this post

Hello again,

In the meantime, VBR 9.0.0.1491 has been released, and I have contacted a NetApp Systems Engineer to further investigate why throughput on a NetApp FAS2040 is limited compared to other storage manufacturers, e.g. HP with its MSA2040. The NetApp Systems Engineer told me that a single process on the NetApp cannot use the complete bandwidth, so that other processes still get their share of the bandwidth as well.

In other words, checking 'Enable parallel processing' in the general options, combined with setting 'Limit maximum concurrent tasks to' more than 1 on the backup repository, should be used in environments with NetApp FAS2040 storage in order to get more than the roughly 75 MB/s total throughput.

Our new VBR 9.x backup server will go into production next week, and I will check overall performance with 2 backup tasks (capable of attaching more than one VMDK to each of the two available virtual VMware Backup Proxies simultaneously) running against the same storage unit.

I plan to post the results here again.
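
As a rough back-of-the-envelope for next week's test: if the ~75 MB/s really is a per-stream limit on the FAS2040 and the disks have headroom, two or more concurrent tasks should scale the total until the aggregate, the FC link or the repository becomes the new bottleneck (the per-stream figure comes from the jobs above; the aggregate ceiling below is just an assumption to be confirmed):

Code: Select all

# Expected total throughput with N concurrent backup tasks, assuming each
# stream is capped at ~75 MB/s and some aggregate limit exists further up.
per_stream_mb_s = 75            # observed single-stream rate from the jobs above
assumed_ceiling_mb_s = 190      # assumption: aggregate limit of FC path / disk aggregate

for tasks in (1, 2, 3, 4):
    expected = min(tasks * per_stream_mb_s, assumed_ceiling_mb_s)
    print(f"{tasks} concurrent task(s): ~{expected} MB/s expected in total")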
Using the most recent Veeam B&R in many different environments now and counting!