Delo123 wrote: Hmm, sounds perfect. Let's get this working and grab a weekend beer.
Sounds like a good idea!
The disk backup job results with NBD will follow soon, and then I will call it a day as well.
Delo123 wrote: Hmm ok... Let's think about it again and have a fresh start Monday!
One last thing that came to mind: our NetApp volumes use deduplication, and the deduplication rate is very high (60% savings on the volumes where the VMs' OS disks are located). Should that make a significant difference?
kryptoem wrote: I've seen similar performance issues with V9 - I've had to force the proxies to run Virtual Appliance mode with failover to network. The vSphere environment is 6 (latest build). I have seen less of a VSS stun, but backup performance appears to be much slower than before (V8).
Hello kryptoem, what do you mean by 'with failover to network'? I am using vSphere 5.5 here instead.
jveerd1 wrote: You should look at the Disk Utilization counters when running a sysstat -x. If Disk Utilization is above 70% during the backup job, you have hit your bottleneck. Please contact NetApp support or your supplier in this case, because the implementation or sizing might not be correct. If Disk Utilization is below 50%, you can rest assured your NetApp is probably not the bottleneck and you should investigate the FC connections to your ESXi servers. Veeam support might be able to help; otherwise contact VMware support.
Here is the sysstat -x output captured during the Veeam backup disk job with NBD transport mode (a quick sketch for checking these utilization thresholds follows the second output block below):
Veeam bottleneck statistics are accurate; you can trust them when troubleshooting.
Code: Select all
fas2040ctrl2> sysstat -x
CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP CP Disk OTHER FCP iSCSI FCP kB/s iSCSI kB/s
in out read write read write age hit time ty util in out in out
19% 0 0 0 422 1 3 70753 389 0 0 4 98% 2% T 39% 1 421 0 211 67132 0 0
19% 0 0 0 78 1 4 71616 401 0 0 1 97% 2% T 37% 2 76 0 192 65377 0 0
17% 0 0 0 155 1 2 57792 909 0 0 1 96% 7% T 48% 1 154 0 654 59059 0 0
18% 0 0 0 69 0 0 69846 1027 0 0 2 99% 6% T 44% 1 68 0 237 64031 0 0
19% 0 0 0 105 0 0 67187 2929 0 0 2 98% 14% T 42% 1 104 0 453 64932 0 0
17% 0 0 0 136 1 2 63350 861 0 0 1 98% 5% T 42% 1 135 0 458 62221 0 0
18% 0 0 0 243 0 0 66549 871 0 0 1 98% 5% T 37% 28 215 0 902 64342 0 0
19% 0 0 0 246 6 38 64064 1359 0 0 1 99% 8% Tf 44% 2 244 0 1174 56442 0 0
Code: Select all
fas2040ctrl2> sysstat -x
CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP CP Disk OTHER FCP iSCSI FCP kB/s iSCSI kB/s
in out read write read write age hit time ty util in out in out
11% 0 0 0 981 1 3 4156 16097 0 0 4s 99% 70% Ff 16% 22 959 0 8438 4047 0 0
3% 0 0 0 35 0 0 505 4274 0 0 16s 97% 21% T 4% 1 34 0 508 149 0 0
2% 0 0 0 101 1 2 379 840 0 0 16s 99% 5% T 2% 8 93 0 458 842 0 0
2% 0 0 0 69 0 4 622 1257 0 0 16s 100% 7% Tv 3% 1 68 0 351 4 0 0
1% 0 0 0 69 0 2 401 546 0 0 16s 99% 4% T 3% 22 47 0 170 789 0 0
2% 0 0 0 43 0 0 375 564 0 0 16s 98% 3% T 2% 1 42 0 231 24 0 0
4% 0 0 0 199 0 1 1981 2245 0 0 16s 96% 12% T 9% 8 191 0 845 2047 0 0
25% 0 0 0 869 0 1 33883 19848 0 0 0s 100% 36% 3f 49% 7 862 0 20110 34134 0 0
31% 0 0 0 1053 0 4 27222 33695 0 0 0s 100% 66% 3 55% 1 1052 0 28166 25566 0 0
25% 0 0 0 838 1 3 24135 25648 0 0 0s 100% 59% Hn 45% 22 816 0 17770 24590 0 0
25% 0 0 0 1720 0 0 11289 28224 0 0 0s 98% 55% H 37% 1 1719 0 21928 10135 0 0
18% 0 0 0 1725 0 1 7116 17145 0 0 29 99% 65% Ff 26% 7 1718 0 12279 6868 0 0
8% 0 0 0 1033 0 4 4738 7715 0 0 11s 94% 33% Tf 15% 1 1032 0 1530 4034 0 0
18% 0 0 0 3391 0 3 13144 9031 0 0 0s 100% 37% Ff 24% 22 3369 0 10540 13523 0 0
16% 0 0 0 2449 0 0 9823 13711 0 0 1s 100% 63% Ff 33% 1 2448 0 8816 9311 0 0
16% 0 0 0 2408 1 2 9302 11281 0 0 1s 100% 44% F 23% 7 2401 0 12410 9461 0 0
8% 0 0 0 358 7 14 1386 10699 0 0 29 99% 44% F 10% 1 357 0 3590 960 0 0
20% 0 0 0 3835 0 2 15487 8302 0 0 0s 100% 32% 2f 27% 22 3813 0 9457 15569 0 0
22% 0 0 0 3527 6 40 13893 18396 0 0 1s 100% 71% F 30% 55 3472 0 12985 13301 0 0
32% 0 0 0 1442 7 64 5443 38409 0 0 0s 99% 69% F 28% 7 1435 0 29727 4522 0 0
37% 0 0 0 3071 0 4 40014 31503 0 0 2s 99% 67% F 39% 1 3070 0 23948 36909 0 0
9% 0 0 0 519 0 3 6146 7478 0 0 1s 99% 35% Tf 13% 22 497 0 6148 6536 0 0
27% 0 0 0 938 0 1 35554 22679 0 0 0s 100% 43% 3f 47% 7 931 0 22330 35776 0 0
20% 0 0 0 598 1 4 17691 19709 0 0 0s 99% 56% Ff 39% 1 597 0 17154 17057 0 0
28% 0 0 0 1657 1 3 17192 28666 0 0 0s 99% 64% Hf 45% 23 1634 0 20644 16835 0 0
19% 0 0 0 3158 0 0 12600 15931 0 0 32 99% 63% H 27% 2 3155 0 12758 11766 0 0
15% 0 0 0 1060 0 1 4489 16528 0 0 0s 98% 58% Hf 20% 7 1053 0 8994 4377 0 0
20% 0 0 0 3879 0 29 15451 10489 0 0 0s 100% 39% Hf 23% 16 3863 0 9788 15016 0 0
11% 0 0 0 864 0 2 3378 16032 0 0 3s 100% 69% F 17% 22 842 0 8456 3576 0 0
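For what it's worth, instead of eyeballing every row you can check jveerd1's thresholds against a captured log. The Python sketch below is just my own quick helper, not a NetApp or Veeam tool; it assumes the sysstat -x output was saved to a text file (or piped in) and that every data row carries exactly four percent-suffixed fields (CPU, Cache hit, CP time, Disk util), which holds for the FAS2040 output above.

Code: Select all
#!/usr/bin/env python3
"""Quick helper to summarize disk utilization in captured `sysstat -x` output.

Assumption: each data row carries exactly four percent-suffixed fields
(CPU, Cache hit, CP time, Disk util), as in the FAS2040 samples above.
"""
import sys

def disk_utils(lines):
    """Yield the Disk util value (integer percent) from each data row."""
    for line in lines:
        # Collect percent-suffixed tokens; header and prompt lines have none.
        pct = [tok.rstrip('%') for tok in line.split() if tok.endswith('%')]
        if len(pct) == 4:  # data row: CPU, Cache hit, CP time, Disk util
            yield int(pct[3])

if __name__ == '__main__':
    utils = list(disk_utils(sys.stdin))
    if not utils:
        sys.exit('no sysstat data rows found')
    avg, peak = sum(utils) / len(utils), max(utils)
    print(f'samples: {len(utils)}  avg disk util: {avg:.0f}%  peak: {peak}%')
    # jveerd1's rule of thumb: sustained >70% means the disks are the
    # bottleneck; below 50% the NetApp is probably fine, so look at the
    # FC connections / proxies next.
    if avg > 70:
        print('disk utilization points at a disk bottleneck')
    elif avg < 50:
        print('NetApp disks probably fine; investigate FC paths next')

Run it as python3 sysstat_util.py < sysstat.log (the filename is just an example). Fed the first output block above, it reports roughly 42% average and 48% peak disk utilization, i.e. well under the 70% bottleneck mark.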
kryptoem wrote: I've seen similar performance issues with V9 - I've had to force the proxies to run Virtual Appliance mode with failover to network. The vSphere environment is 6 (latest build). I have seen less of a VSS stun, but backup performance appears to be much slower than before (V8).
First of all, we use vSphere 5.5, and the proxies automatically selected the transport mode and would fail over to network. I have now forced the proxies to use Virtual Appliance mode with failover to network and restarted a full backup job. Results will be posted here soon ...