Standalone backup agents for Linux, Mac, AIX & Solaris workloads on-premises or in the public cloud
gtelnet
Service Provider
Posts: 40
Liked: 16 times
Joined: Mar 28, 2020 3:50 pm
Full Name: GregT

Each flr session reaches 45MB/s, 10 concurrent reach 450MB/s

Post by gtelnet »

Hi All,

I opened case 04717852 two weeks ago and wanted to see if anyone else has experienced this.

Physical Ubuntu 20.04 repo with 256GB RAM, 24 processors, and twelve 12Gbps multipathed disks in software RAID10, with read speeds of ~500MB/s.

Destination server is a physical Ubuntu 18.04 box on a SAN with write speeds of ~800MB/s.

Started a Linux FLR session with the destination server as the FLR helper appliance and used the GUI to restore 6.7TB. Restore speed maxes out at 45MB/s, so the restore will take 45+ hours, even though an active full backup of this server takes less than 12 hours.

I knew that was too slow, so I started a second FLR session; it also maxed out at 45MB/s, but iotop on the repo now showed 90MB/s of reads. I started a couple more FLR sessions and the aggregate restore speed kept increasing, but CPU utilization on the Veeam server climbed high, so I went a different route.

I saw that each FLR session creates a mount point on the destination server, so I tried a few ways of copying the data from the command line on the destination server: cp, rsync, rclone, etc. Each maxed out at 45MB/s, but with zero CPU impact on the Veeam B&R server, since it is now removed from the restore path.

Example:

Code:

sudo rclone copy -v --stats 60s --transfers 1 --ignore-existing /tmp/Veeam.Mount.FS.8ccb301b-441b-4955-8a5b-43774ef01669/FileLevelBackup_0/ /destinationfolder
I started another rclone against the same /tmp/Veeam.Mount.FS mount point, and each of the two rclone processes dropped to ~22MB/s, showing they were evenly sharing the 45MB/s cap.

I then started a total of 10 FLR sessions and ran the rclone command 10 times on the destination server, each against a different Veeam mount point from one of the FLR sessions. Each rclone is still capped at 45MB/s, but iotop on the repo now shows 450MB/s of reads and iftop shows ~3.5Gbps of throughput. This shows that the repo/destination hardware and network are not the bottleneck, and suggests the Veeam Linux agent is capping per-stream transfer speed. With the 10 concurrent jobs, the restore finished in ~5 hours.
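
For anyone trying the same workaround, the ten manual rclone runs could be wrapped in a small loop. This is only a rough sketch: it assumes the FLR sessions are already started, that their mount points all follow the /tmp/Veeam.Mount.FS.* pattern from the example above, and that /destinationfolder is a placeholder for the real restore target.

Code:

#!/bin/bash
# Rough sketch: run one rclone copy per active Veeam FLR mount point, in parallel.
# Assumes the FLR sessions are already mounted under /tmp/Veeam.Mount.FS.* and
# that /destinationfolder is the restore target (both taken from the example above).
for mount in /tmp/Veeam.Mount.FS.*/FileLevelBackup_0; do
    rclone copy -v --stats 60s --transfers 1 --ignore-existing \
        "$mount/" /destinationfolder &
done
wait  # wait for all background copies to finish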

I also tried several different Linux boxes as the helper appliance, and the throughput results were identical. I tried increasing the priority of the Veeam processes on the repo/destination with the nice command, but that had no effect on the per-session transfer speed.

Is this a known issue? Are there any registry keys to unleash the power of the FLR agent so that it uses all available resources? The workaround of running 10 FLR sessions is very manual and prone to error, so I would really like to be able to run this once and have it go at full speed. Thank you!
aj_potc
Expert
Posts: 141
Liked: 35 times
Joined: Mar 17, 2018 12:43 pm

Re: Each flr session reaches 45MB/s, 10 concurrent reach 450MB/s

Post by aj_potc » 1 person likes this post

What type of storage optimization setting are you using for this backup job?

For me, this made a big difference in restore speed.

In my case, I had performance problems doing a bare metal restore between a Linux-based repository and destination. I would consistently hit a limit of about 25 MB/s during the restore, even though the hardware on both ends and the network between them could easily handle 100 MB/s (the gigabit network being the limiting factor). No matter what target system I used, I bumped up against that 25 MB/s cap for this particular backup job.

After a very long troubleshooting process, a Veeam support agent suggested that I might be using the wrong block size for the backup. Apparently, by choosing the "WAN target" option under storage optimization, I was using 256KB blocks. Changing this to "Local target" increased the block size to 1MB, and my restore performance improved greatly. It still didn't saturate the network connection, but it could fill about 80% of it, which is at least acceptable.
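
As a rough back-of-the-envelope for why the block size matters so much here: if a restore stream keeps only one read request outstanding at a time, per-stream throughput is roughly block size divided by per-request round-trip time, so quadrupling the block size quadruples the throughput at the same request rate. The latency figure below is an assumption picked purely so the arithmetic lines up with my numbers, not something I measured.

Code:

# Illustration only -- the 10ms per-request latency is an assumed figure,
# not a measurement. With one outstanding read per stream (queue depth 1),
# per-stream throughput ~= block_size / per_request_latency.
awk 'BEGIN {
    latency = 0.010                                      # assumed seconds per read
    printf "WAN target,   256KB blocks: %3.0f MB/s\n", 0.25 / latency   # ~25 MB/s
    printf "Local target, 1MB blocks:   %3.0f MB/s\n",  1.00 / latency  # ~100 MB/s
}'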

As with your problem, I never had issues with backup speed; backups would fly along at line speed. Only restores were troublesome.

I've never gotten a good explanation of this. While I can understand that a larger block size is better in certain situations, shouldn't even a small block size perform well when you've got a lot of hardware horsepower on both sides? Sure, the blocks may be smaller, but with fast CPU and disks, shouldn't it be possible to fill a gigabit pipe?

Sorry to add more questions to your topic -- it's just been something I'm wondering about related to slow restore speeds.
Gostev
Chief Product Officer
Posts: 31544
Liked: 6715 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Each flr session reaches 45MB/s, 10 concurrent reach 450MB/s

Post by Gostev » 1 person likes this post

The FLR process is effectively random I/O with minimal queue depth... so it's perfectly normal that you need many such streams to saturate the full I/O capacity and bandwidth of an enterprise SAN.
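
For anyone who wants to sanity-check this on their own repository, one way to reproduce the effect outside of Veeam is a synthetic random-read test at queue depth 1 with a tool like fio. This is purely illustrative; the test file path and sizes are placeholders.

Code:

# Single stream at queue depth 1 -- roughly the I/O pattern of one FLR restore stream.
fio --name=qd1 --filename=/backups/fio.test --size=10G --rw=randread --bs=1M \
    --ioengine=libaio --iodepth=1 --direct=1 --runtime=60 --time_based

# Ten such streams in parallel -- analogous to running ten FLR sessions at once.
fio --name=qd1x10 --filename=/backups/fio.test --size=10G --rw=randread --bs=1M \
    --ioengine=libaio --iodepth=1 --direct=1 --numjobs=10 --group_reporting \
    --runtime=60 --time_based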
gtelnet
Service Provider
Posts: 40
Liked: 16 times
Joined: Mar 28, 2020 3:50 pm
Full Name: GregT

Re: Each flr session reaches 45MB/s, 10 concurrent reach 450MB/s

Post by gtelnet »

Thank you, Gostev. Any chance there's a setting to have the Veeam restore start X streams itself, so that we don't have to do it through such a manual process? Or would that be a feature request? Thanks again!
gtelnet
Service Provider
Posts: 40
Liked: 16 times
Joined: Mar 28, 2020 3:50 pm
Full Name: GregT

Re: Each flr session reaches 45MB/s, 10 concurrent reach 450MB/s

Post by gtelnet »

aj_potc wrote: Apr 12, 2021 11:07 pm
What type of storage optimization setting are you using for this backup job?
Thanks for your input, AJ! We have all of our local backups set to Local Target as well.
aj_potc
Expert
Posts: 141
Liked: 35 times
Joined: Mar 17, 2018 12:43 pm

Re: Each flr session reaches 45MB/s, 10 concurrent reach 450MB/s

Post by aj_potc » 1 person likes this post

Sorry I couldn't be more helpful. I suppose my issue was more related to block-level restores. (Perhaps FLR would be similarly affected.)

Like you, I have wondered why Veeam isn't more aggressive in using I/O and network resources during restoration processes. It would be great if this could be customized, as you suggested. Sometimes you want Veeam to be nice on I/O resources, but other times, you just want something to happen ASAP.

By the way, great idea to use rclone! The capabilities of that utility never cease to amaze me.