Code: Select all
Load: Source 91% > Proxy 43% > Network 67% > Target 99%
> I'm getting slow throughput
What speed and what hardware do you have? How many disks / VMs do you back up in parallel? What are the "raw performance" values independent from Veeam (what I/O performance did you see with your preferred I/O test tool, and what did you configure for that test tool)? Did you do simple "copy" tests?
> but is it true that hardened repositories only support NBD transport mode?
That's false. For the (hardened) repository the backup mode is irrelevant.
> test performance between proxy and repository since a tool like DiskSpd won't work?
The performance between proxy and repo seems to be okay; Network is at 67%. The connection can be tested, for example, with iperf. DiskSpd, or whatever you prefer, is one disk I/O test tool that would be an option, yes.
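A minimal sketch of that iperf test, assuming iperf3 is installed on both ends; `repo.example.local` is a hypothetical hostname, and only the unit-conversion line at the bottom actually executes anything:

```shell
# Hypothetical repository hostname; adjust to your environment.
# On the repository:  iperf3 -s
# On the proxy:       iperf3 -c repo.example.local -t 30
# With parallel streams (checks whether a single TCP flow is the limit):
#                     iperf3 -c repo.example.local -t 30 -P 4

# Convert iperf3's Gbit/s into the MB/s Veeam shows: 5.6 Gbit/s -> 700 MB/s
awk 'BEGIN { printf "%d\n", 5.6 * 1000 / 8 }'
```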
Why is it irrelevant? I tested in virtual appliance and network mode; virtual appliance mode was approximately 25% slower. Per-machine backup files is not enabled on the repository, but right now I've only been testing with a single VM in the backup job at a time. I've tested with FIO locally on the repository and I'm getting about 1000 MB/s sequential write speeds. Disk speeds on the source are 2600 MB/s. Testing the network with iperf between the proxy and repository yields 5.6 Gbps (700 MB/s) from repository to proxy and 10.7 Gbps (1337 MB/s) from proxy to repository. Backup speeds are capping out between 75 MB/s and 100 MB/s in appliance/network mode respectively. The network bottleneck may be attributed to the fact that I'm not using jumbo frames yet, but I would think I should still be getting more than 100 MB/s max.
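For reference, a hedged sketch of the kind of local-disk and jumbo-frame checks described above; the fio flags, file path, and hostname are assumptions (the poster's exact FIO command isn't given), and only the final conversion line executes:

```shell
# Hypothetical paths/hostnames; adjust before use.
# Sequential-write test on the repository, similar to the FIO run above:
#   fio --name=seqwrite --filename=/backups/fio.test --rw=write --bs=1M \
#       --size=8G --ioengine=libaio --direct=1 --group_reporting
#
# Jumbo-frame sanity check before relying on MTU 9000 end to end:
#   ping -M do -s 8972 -c 3 repo.example.local   # 9000 - 20 (IP) - 8 (ICMP)
#
# The quoted proxy->repository figure in MB/s: 10.7 Gbit/s ~= 1337 MB/s
awk 'BEGIN { printf "%d\n", 10.7 * 1000 / 8 }'
```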
lando_uk wrote: ↑Dec 06, 2016 6:35 pm I was ultimately disappointed when moving to 10 GbE management. After testing, I confirmed that NBD mode only goes at about 100 MB/s per VM max (some ESXi-related throttling), but when you have more VMs in a job (all on the same host), combined they can achieve much faster speeds, going up to 500 MB/s and beyond, ultimately beating HotAdd timings when processing lots.
> VMware limitation that hard-caps nbd at 100MB/s
I can remember a 40% limit that was never officially documented. That was ugly during 1 Gbit/s times, but should be more or less irrelevant today. I also noticed in my lab some years ago that, with 1 Gbit/s management ports, I could do faster backups than 400 Mbit/s. I believe that performance improvement happened around ESXi 6.7.
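A back-of-the-envelope check of that undocumented ~40% share (an assumption based on the recollection above, not an official figure):

```shell
# 40% of 1 Gbit/s and of 10 Gbit/s, expressed in MB/s:
# ~50 MB/s would indeed have been painful on 1 GbE, while ~500 MB/s
# matches the combined NBD throughput lando_uk reported on 10 GbE.
awk 'BEGIN { printf "%d %d\n", 0.4 * 1000 / 8, 0.4 * 10000 / 8 }'
```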
> why hot add performance is even worse than nbd
That's a rare situation, agreed. But that's proxy speed, not repository speed. That's why I wrote that the backup mode is irrelevant for a hardened repository. You would see the same with a standard Linux repository or a Windows repository.
> I've only been testing with a single VM
How many disks? The more disks, the better speed you should see.
> proxy and repository yields speeds of 5.6 Gbps (700 MB/s)
That's the speed you should get with some parallel processing (multiple disks and multiple VMs with per-machine backup chains).