13Hemi
Novice
Posts: 7
Liked: 2 times
Joined: Jul 13, 2023 6:46 pm
Full Name: Jason Stewardson
Contact:

Slow Transfer Speed on New Server

Post by 13Hemi »

I've recently moved my backup jobs over to a new Linux repository server. I was expecting a speed increase, since the old repo server only had 1Gb networking versus 10Gb in the new one, but I'm not seeing the speeds I would expect. My setup is as follows:
  • Linux repository with 10Gb networking
  • VMware hosts with 10Gb networking
  • Veeam B&R server with 1Gb networking (for job management only)
  • Two virtual VMware proxies on Windows Server 2016 with 10Gb networking, 8GB of RAM and 8 vCPUs
  • Transport mode for each proxy is set to automatic with 8 max concurrent tasks
  • Backup jobs are spread out over the evening so they don't overlap or overload the proxies
  • For these jobs, everything is located in the same DC on the same VLAN and switches

I've run an iperf test from the proxy to the repository and from a couple of VMs to the repository, and I'm seeing about 5 Gbit/s. Of course, I'm not expecting that kind of speed from the jobs, but they're only showing a processing rate of anywhere from 40-80 MB/s. The bottleneck summary on almost all of them shows 0% for everything except the proxy, which shows only 10%.

    Am I wrong in expecting a higher speed than what I'm getting? Is there anything I can do to improve the backup speeds?
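For a quick sanity check on those numbers (a sketch only; the figures are the ones quoted above), 5 Gbit/s of iperf throughput is roughly 625 MB/s, so a 40-80 MB/s processing rate is using only a small fraction of what the link can carry:

```python
# Rough comparison of measured iperf throughput vs. the reported
# job processing rate (figures taken from the post above).

iperf_gbit_per_s = 5.0                         # measured proxy -> repository throughput
iperf_mb_per_s = iperf_gbit_per_s * 1000 / 8   # ~625 MB/s (decimal megabytes)

for rate in (40, 80):                          # reported processing rate range, MB/s
    utilisation = rate / iperf_mb_per_s * 100
    print(f"{rate} MB/s is ~{utilisation:.0f}% of the ~{iperf_mb_per_s:.0f} MB/s the network can carry")
```

That gap points at something other than raw network bandwidth as the limit.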
    MarkBoothmaa
    Veeam Legend
    Posts: 194
    Liked: 54 times
    Joined: Mar 22, 2017 11:10 am
    Full Name: Mark Boothman
    Location: Darlington, United Kingdom
    Contact:

    Re: Slow Transfer Speed on New Server

    Post by MarkBoothmaa »

    The sizing for your proxies looks a little wrong to be fair - I'd say bump the RAM to 16GB in the first instance.
    'Memory: 2 GB RAM plus 500 MB for each concurrent task. The actual size of memory required may be larger and depends on the amount of data to back up, machine configuration, and job settings. Using faster memory (DDR3/DDR4) improves data processing performance.'
    Does it show you using Hot Add or Network Mode in the job stats?
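For reference, a quick calculation of the documented minimum from the guidance quoted above (a sketch; the 8-task figure comes from the original post):

```python
# Minimum proxy RAM per the quoted sizing guidance:
# 2 GB base + 500 MB (0.5 GB) per concurrent task.

base_gb = 2.0
per_task_gb = 0.5
concurrent_tasks = 8          # max concurrent tasks per proxy, from the post

minimum_gb = base_gb + per_task_gb * concurrent_tasks
print(f"Documented minimum for {concurrent_tasks} tasks: {minimum_gb:.0f} GB")  # 6 GB
```

So 8GB just clears the documented minimum for 8 concurrent tasks, but as the quoted text notes the real requirement can be larger, which is why bumping to 16GB gives useful headroom.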
    13Hemi
    Novice
    Posts: 7
    Liked: 2 times
    Joined: Jul 13, 2023 6:46 pm
    Full Name: Jason Stewardson
    Contact:

    Re: Slow Transfer Speed on New Server

    Post by 13Hemi »

    Hi Mark,

    I've increased the RAM on those servers to 16GB each, hopefully that helps a bit. As for the job stats, it's showing that it's using Hot Add.
    galbitz
    Enthusiast
    Posts: 42
    Liked: 5 times
    Joined: May 17, 2018 2:22 pm
    Full Name: grant albitz
    Contact:

    Re: Slow Transfer Speed on New Server

    Post by galbitz »

What are the source and destination disk configurations? I'd usually expect that to be the bottleneck in most cases. The number of spindles and RAID level for both source and target is a big factor.
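As a rough illustration of the point about spindles and RAID level (a sketch with generic ballpark figures, not numbers from this environment; the commonly quoted RAID6 write penalty is 6 disk I/Os per random write):

```python
# Very rough random-write capacity estimate for a spinning-disk RAID group.
# All figures are generic ballpark assumptions, not measurements of the
# poster's hardware.

spindles = 8                      # hypothetical number of disks in the group
iops_per_spindle = 80             # ballpark for a 7.2K RPM SAS drive
raid6_write_penalty = 6           # ~6 disk I/Os per random write in RAID6

random_write_iops = spindles * iops_per_spindle / raid6_write_penalty
print(f"~{random_write_iops:.0f} random write IOPS from {spindles} spindles in RAID6")
```

Sequential backup writes are much friendlier than this random-write worst case, but it shows why the disk layout on both ends can matter more than network speed.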
    13Hemi
    Novice
    Posts: 7
    Liked: 2 times
    Joined: Jul 13, 2023 6:46 pm
    Full Name: Jason Stewardson
    Contact:

    Re: Slow Transfer Speed on New Server

    Post by 13Hemi »

Source disks are on a SAN on an isolated 10Gb network, connected to each host via separate 10Gb NICs. It's a hybrid pool (mostly spinning 10K RPM SAS disks) in a RAID-Z2 config with some flash caching, served to the VMware hosts via iSCSI. This SAN is being replaced in a few months by a new all-flash array.

Destination is a physical repository host with a RAID6 XFS volume on Ubuntu. Disks are 14TB SAS drives at 7,200 RPM.
    bytewiseits
    Service Provider
    Posts: 54
    Liked: 31 times
    Joined: Nov 23, 2018 12:23 am
    Full Name: Dion Norman
    Contact:

    Re: Slow Transfer Speed on New Server

    Post by bytewiseits »

If you're using software RAID on Ubuntu (mdadm etc.), try re-enabling buffered access mode with a registry DWORD on the VBR server:
DataMoverLegacyIOMode - set to '1' for V12; UseUnbufferedAccess - set to '0' for V11.

From what we saw in our Ubuntu setups, the disk access 'improvements' they made in V11 and again in V12 just killed our disk storage performance where mdadm was used. Setting UseUnbufferedAccess in V11 and DataMoverLegacyIOMode in V12 changed rates from MB/s to GB/s on our jobs, particularly on our Ubuntu XFS repos.

Definitely worth a try - if it doesn't help, you can just remove the keys to revert.
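For anyone who wants to script it, a minimal sketch using Python's winreg on the VBR server - assuming the value goes under the usual Veeam B&R key (HKLM\SOFTWARE\Veeam\Veeam Backup and Replication); check Veeam support/KB guidance for your exact version before applying:

```python
# Sketch: create the DataMoverLegacyIOMode DWORD described above on the VBR server.
# Assumption: values live under HKLM\SOFTWARE\Veeam\Veeam Backup and Replication
# (the usual location for VBR registry values). Run as Administrator on the
# backup server; delete the value again to revert.
import winreg

KEY_PATH = r"SOFTWARE\Veeam\Veeam Backup and Replication"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    # V12: re-enable buffered (legacy) I/O mode for the data mover
    winreg.SetValueEx(key, "DataMoverLegacyIOMode", 0, winreg.REG_DWORD, 1)
    # V11 would instead use:
    # winreg.SetValueEx(key, "UseUnbufferedAccess", 0, winreg.REG_DWORD, 0)
```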
    PetrM
    Veeam Software
    Posts: 3517
    Liked: 590 times
    Joined: Aug 28, 2013 8:23 am
    Full Name: Petr Makarov
    Location: Prague, Czech Republic
    Contact:

    Re: Slow Transfer Speed on New Server

    Post by PetrM »

    Hello,

    I would approach it in a different way: we still don't know where the "bottleneck" is. Based on:
13Hemi wrote: The bottleneck summary on almost all of them shows 0% for everything except the proxy, which shows only 10%
I'd treat it as a technical issue, so it would be best to open a support case and ask our engineers to find the "bottleneck" in the debug logs and explain why all components except the proxy show 0% load - this is strange. Please share the support case ID here so that I can keep an eye on it.

    Thanks!
    13Hemi
    Novice
    Posts: 7
    Liked: 2 times
    Joined: Jul 13, 2023 6:46 pm
    Full Name: Jason Stewardson
    Contact:

    Re: Slow Transfer Speed on New Server

    Post by 13Hemi » 1 person likes this post

    Case #06302728 has been submitted and I've uploaded logs from some sample jobs.