veeeammeupscotty
Enthusiast
Posts: 33
Liked: 2 times
Joined: May 05, 2017 3:06 pm
Full Name: JP
Contact:

Hardened repository performance troubleshooting

Post by veeeammeupscotty »

I've set up an Ubuntu hardened repository using XFS and I'm trying to figure out why I'm getting slow throughput, with high busy percentages on both source and target even though neither storage comes close to its available I/O. The network/storage fabric is all 25GbE (including ESXi management). The job is using a virtual proxy separate from the Veeam server, but I also tested with the proxy running locally on the Veeam server with similar results. I don't see it listed in the limitations, but is it true that hardened repositories only support NBD transport mode? If so, are there any other methods to test performance between proxy and repository, since a tool like DiskSpd won't work?

Code:

Load: Source 91% > Proxy 43% > Network 67% > Target 99%  
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Hardened repository performance troubleshooting

Post by HannesK »

Hello,
a Hardened Repository has the same performance as a normal Linux repository.
"I'm getting slow throughput"
What speed and what hardware do you have? How many disks / VMs do you back up in parallel? What are the "raw performance" values independent of Veeam? What I/O performance did you see with your preferred I/O test tool, and how did you configure it? Did you do simple "copy" tests?
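For a quick "copy"-style baseline, something like this works (the target path is just a placeholder):

Code:

# write 10 GiB of zeroes with direct I/O to bypass the page cache
dd if=/dev/zero of=/mnt/repo/ddtest.bin bs=1M count=10240 oflag=direct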
"but is it true that hardened repositories only support NBD transport mode?"
That's false :-) For the (Hardened) repository, the transport mode is irrelevant.
"test performance between proxy and repository since a tool like DiskSpd won't work?"
The performance between proxy and repo seems to be okay; Network is at 67%. The connection can be tested with iperf, for example. Diskspd, or whatever disk I/O test tool you prefer, would also be an option, yes.
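For example, with iperf3 (the hostname is a placeholder):

Code:

# on the repository
iperf3 -s
# on the proxy; run once in each direction (-R reverses the test)
iperf3 -c repo.example.local -P 4
iperf3 -c repo.example.local -P 4 -R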

If I had to take one guess, I would check whether per-machine backup chains are enabled.

Best regards,
Hannes
soncscy
Veteran
Posts: 643
Liked: 312 times
Joined: Aug 04, 2019 2:57 pm
Full Name: Harvey
Contact:

Re: Hardened repository performance troubleshooting

Post by soncscy »

Why won't diskspd work? It has Linux binaries:

https://github.com/microsoft/diskspd-for-linux

If you don't want to deal with that, the gold standard is fio: https://github.com/axboe/fio

Keep in mind, available IO is not always a reliable metric, and it absolutely depends on how that IO was counted. So maybe the IOPS rating is phenomenal, but the random IO is not so great.
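One way to see that difference with fio is to compare a sequential run against a random one (file path and sizes are just examples):

Code:

# sequential 1 MiB writes: approximates backup-style streaming I/O
fio --name=seqwrite --filename=/mnt/repo/fio.test --rw=write \
    --bs=1M --size=10G --direct=1 --ioengine=libaio --iodepth=16

# random 4 KiB writes: exposes IOPS limits that sequential tests hide
fio --name=randwrite --filename=/mnt/repo/fio.test --rw=randwrite \
    --bs=4k --size=10G --direct=1 --ioengine=libaio --iodepth=32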

Test with diskspd first and just see the performance you get as per the Veeam KB on it.
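With the Linux port, an illustrative write test looks like this (check the KB for the exact parameters Veeam recommends):

Code:

# 30-second, 512 KiB sequential write test, 4 threads, queue depth 8,
# against a 1 GiB test file on the repository volume
diskspd -c1G -d30 -w100 -b512K -o8 -t4 /mnt/repo/testfile.dat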
veeeammeupscotty
Enthusiast
Posts: 33
Liked: 2 times
Joined: May 05, 2017 3:06 pm
Full Name: JP
Contact:

Re: Hardened repository performance troubleshooting

Post by veeeammeupscotty »

HannesK wrote: Dec 17, 2021 6:37 am
That's false :-) For the (Hardened) repository the backup mode is irrelevant.
Why is it irrelevant? I tested in virtual appliance and network mode; virtual appliance mode was approximately 25% slower. Per-machine backup files are not enabled on the repository, but right now I've only been testing with a single VM in the backup job at a time. I've tested with fio locally on the repository and I'm getting about 1000 MB/s sequential write speeds. Disk speeds on the source are 2600 MB/s. Testing the network with iperf between the proxy and repository yields 5.6 Gbps (700 MB/s) from repository to proxy and 10.7 Gbps (1337 MB/s) from proxy to repository. Backup speeds cap out between 75 MB/s and 100 MB/s with appliance/network mode respectively. The network bottleneck may be attributable to the fact that I'm not using jumbo frames yet, but I would think I should still be getting more than 100 MB/s.
veeeammeupscotty
Enthusiast
Posts: 33
Liked: 2 times
Joined: May 05, 2017 3:06 pm
Full Name: JP
Contact:

Re: Hardened repository performance troubleshooting

Post by veeeammeupscotty »

soncscy wrote: Dec 17, 2021 7:57 pm Why won't diskspd work? It has linux binaries:
I mean testing from proxy to repository, since hardened repositories cannot be mounted via NFS or SMB. I did do some testing with fio, as posted above.
micoolpaul
Veeam Vanguard
Posts: 211
Liked: 107 times
Joined: Jun 29, 2015 9:21 am
Full Name: Michael Paul
Contact:

Re: Hardened repository performance troubleshooting

Post by micoolpaul »

Hi veeeammeupscotty (LOVE THAT USERNAME),

Can you confirm the following:
- NIC speed of repository server (1/10/25/40/100Gbps)
- NIC speed of the proxy server, are you using the same VM in NBD and Hot-Add/Virtual Appliance Mode when you’re seeing the 25% performance difference?
- Have you got any storage/network I/O control configured in your environment that could be having an impact?
- The VM being used for hot-add, what type of NIC does it have? (E1000/VMXNET3)
- What is the speed of the switches between proxy and repository?

This should hopefully reveal any bottlenecks :)
-------------
Michael Paul
Veeam Legend | Veeam Certified Architect | Veeam Vanguard
veeeammeupscotty
Enthusiast
Posts: 33
Liked: 2 times
Joined: May 05, 2017 3:06 pm
Full Name: JP
Contact:

Re: Hardened repository performance troubleshooting

Post by veeeammeupscotty »

All NIC speeds are 25Gbps, and I'm using VMXNET3 adapters on both the Veeam server and the proxy. No I/O control is configured. The switches are 25Gbps.

I found a post from 5 years ago which seems to suggest there's a VMware limitation that hard-caps NBD at 100MB/s, and a Veeam employee even confirmed it at the time. If that's still the case, then I'm still confused as to why hot-add performance is even worse than NBD.
lando_uk wrote: Dec 06, 2016 6:35 pm I was ultimately disappointed when moving to 10Gbe Management, after testing I confirmed that NBD mode only goes at about 100MB/s per VM max (some esxi related throttling), but when you have more VM's in a Job (all on the same host), combined they can achieve much faster speeds, going up to 500MB/s and beyond, ultimately beating hotadd timings when processing lots.
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Hardened repository performance troubleshooting

Post by HannesK »

Hello,
"VMware limitation that hard-caps nbd at 100MB/s"
I can remember a 40% limit that was never officially documented. That was ugly in 1Gbit/s times, but it should be more or less irrelevant today. I also noticed in my lab some years ago that I could do backups faster than 400Mbit/s over 1Gbit/s management ports. I believe that performance improvement happened around ESXi 6.7.
"why hot add performance is even worse than nbd"
That's a rare situation, agreed. But that's proxy speed, not repository speed. That's why I wrote that the transport mode is irrelevant for the Hardened Repository: you would see the same with a standard Linux repository or a Windows repository.
"I've only been testing with a single VM"
How many disks? The more disks, the better the speed you should see.
"proxy and repository yields speeds of 5.6 Gbps (700 MB/s)"
That's the speed you should get with some parallel processing (multiple disks and multiple VMs with per-machine backup chains).
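To check what the repository volume delivers with parallel writers, a multi-job fio run is one option (directory, sizes, and job count are placeholders):

Code:

# eight concurrent sequential writers emulate multiple disks / VMs
# hitting the repository at once
fio --name=parallel --directory=/mnt/repo --rw=write --bs=512k \
    --size=2G --numjobs=8 --direct=1 --ioengine=libaio \
    --iodepth=8 --group_reporting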

Best regards,
Hannes