Opened case 04717852 two weeks ago and wanted to see if anyone has experienced this.
Source: physical Ubuntu 20.04 repo with 256GB RAM, 24 processors, twelve 12Gbps multipathed disks in software RAID10, read speeds of ~500MB/s.
Destination: physical Ubuntu 18.04 server on a SAN, write speeds of ~800MB/s.
Started a Linux FLR session with the destination server as the FLR helper appliance and used the GUI to restore 6.7TB. The restore speed maxed out at 45MB/s, which works out to 45+ hours for the restore, even though an active full of this server takes less than 12 hours.
I knew that was too slow, so I started a second FLR session. It also maxed out at 45MB/s, but iotop on the repo was now showing 90MB/s of reads. I started a couple more FLR sessions and the aggregate restore speed kept climbing, but CPU utilization on the Veeam server was getting high, so I went a different route.
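(In case it helps anyone reproduce this: the numbers here and below came from standard tools, roughly like so. The interface name is a placeholder.)

Code:
# On the repo: show only processes actually doing I/O, refresh every 5s
sudo iotop -o -d 5
# On the repo: watch network throughput (replace eth0 with your NIC)
sudo iftop -i eth0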
Each FLR session creates a mount point on the destination server, so I tried a few ways of copying the data from the command line on the destination server: cp, rsync, rclone, etc. Each maxed out at 45MB/s, but with zero CPU impact on the Veeam B&R server, since it is now removed from the restore process.
Example:
Code:
sudo rclone copy -v --stats 60s --transfers 1 --ignore-existing /tmp/Veeam.Mount.FS.8ccb301b-441b-4955-8a5b-43774ef01669/FileLevelBackup_0/ /destinationfolder
I then started a total of 10 FLR sessions and ran the rclone command 10 times on the destination server, each instance using the mount point from a different FLR session. Each rclone was still capped at 45MB/s, but iotop on the repo now showed 450MB/s of reads and iftop showed ~3.5Gbps of throughput. This seems to show pretty clearly that the repo/destination hardware and network are not the bottleneck, and that the Veeam Linux agent is capping the per-session transfer speed. With the 10 concurrent jobs, the restore finished in ~5 hours.
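For reference, here is roughly what that manual fan-out looks like as a script. This is only a sketch: the mount-point glob is an assumption based on the path above (each FLR session appears to get its own /tmp/Veeam.Mount.FS.<uuid> directory), and --ignore-existing is what lets each instance skip files another instance has already written, so files still in flight can occasionally get copied twice.

Code:
#!/bin/bash
# Sketch only: one rclone per Veeam FLR mount point, run in parallel.
# The glob assumes each FLR session mounts under its own
# /tmp/Veeam.Mount.FS.<uuid> directory, as in the example above.
DEST=/destinationfolder
for mnt in /tmp/Veeam.Mount.FS.*/FileLevelBackup_0; do
    sudo rclone copy -v --stats 60s --transfers 1 --ignore-existing "$mnt" "$DEST" &
done
wait    # block until all parallel copies have finished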
I also tried several different Linux boxes as the helper appliance, and the throughput results were identical. I tried raising the priority of the Veeam processes on the repo/destination with nice/renice, but that had no effect on the individual transfer speed.
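(For clarity, the priority change was along these lines; the veeamagent process name is my assumption, so match whatever the data mover processes are actually called on your boxes.)

Code:
# Raise the priority of the running Veeam data mover processes.
# 'veeamagent' is an assumed process name -- verify with ps/pgrep first.
sudo renice -n -10 -p $(pgrep -f veeamagent)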
Is this a known issue? Are there any reg keys to unleash the power of the FLR agent so that it uses all available resources? The workaround of running 10 FLR sessions is very manual and error-prone, so I would really like to be able to run this command once and have it go at full speed. Thank you!