atmarx
Novice
Posts: 8
Liked: never
Joined: May 15, 2019 12:08 am
Full Name: andrew

Speed sanity check

Post by atmarx »

Hi folks --

I'm new to Veeam, and am trying it out to see if it's a better fit for my 100TB of data than (gulp) DPM.

I've got a Dell R730 with good procs and 96GB of RAM tied to a Dell MD3460 disk array via 4x 12Gbps SAS connections (MPIO configured as round robin with subset (RRws), 2 active paths). I've got this server wired directly to the file server via two bonded 10Gb links (Intel X520 card in each). The storage server is connected to another MD3460 with the same setup (except 160GB of RAM on that host). Each of the arrays is 210TB, made up of 58 8TB NL-SAS drives in RAID 10.

I got Veeam set up with defaults and started the job; the speeds fluctuated a bit but eventually settled around 250MBps. Given that I know the arrays can pump data at close to 800-900MBps and the connection should be able to push at least 1GBps, I'm stuck feeling like 250MBps is... not optimal. I tried compression from dedupe-friendly up to maximum, but got roughly the same throughput.
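For context, here's the rough math I keep running in my head for a full pass over the two VMs. Everything below is my own estimate (sizes, the array ceiling, and wire speed are assumptions, not measurements):

```python
# Back-of-the-envelope: how long a full pass takes at various throughputs.
# All inputs are my own guesses/targets, not measurements.
data_tb = 120                                  # the two VMs' VHDXs total ~40TB + ~80TB
rates_mb_per_s = {                             # MByte/s, not MBit/s
    "observed":                 250,
    "hoped for":                500,
    "array ceiling (claimed)":  900,
    "~2x 10GbE wire speed":    2300,
}

data_mb = data_tb * 1024 * 1024                # TB -> MB (binary units)
for label, rate in rates_mb_per_s.items():
    hours = data_mb / rate / 3600
    print(f"{label:>25}: {rate:5d} MB/s -> ~{hours:6.1f} h for a full pass")
```

At 250MBps the initial full is roughly 140 hours versus about 70 hours at 500MBps, which is why I care about closing the gap.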

I've tried reworking the stripe size on the backup storage array (originally 256KB, tried 512KB) to no effect (well, sort of -- now I've got a 100-hour background initialization job :roll: ). It was worth it, though, because the array was originally configured without drawer loss protection, so at least I got that out of it 8)

Given the storage specs, am I missing some magical combination that'll get me closer to 500MBps? The two VMs have attached VHDXs totalling 40TB and 80TB -- should I be using the 8MB Veeam blocks instead of the default 1MB?
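For a sense of scale on that question, here's a rough block-count comparison. I'm assuming the large-block option really is 8MB (depending on the B&R version it may be 4MB), so treat the numbers as illustrative:

```python
# How many blocks Veeam would have to track per VM at different block sizes.
# Block sizes are my assumption of what the storage-optimization options mean.
vm_sizes_tb = {"operational": 40, "research": 80}
block_sizes_mb = [1, 8]          # default vs. the large-block option (assumed 8MB)

for name, size_tb in vm_sizes_tb.items():
    size_mb = size_tb * 1024 * 1024
    for bs in block_sizes_mb:
        blocks = size_mb // bs
        print(f"{name:>12} VM @ {bs}MB blocks: ~{blocks / 1e6:5.1f} million blocks")
```

Fewer, larger blocks mean less metadata to track and fewer writes per MB, but coarser change tracking, so incrementals tend to grow -- that's the trade-off I'm trying to weigh.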

I also broke the team to see if that was the issue, but no improvement. The only thing that changed anything was setting the bond to switch-independent/dynamic, which maimed performance (dropped to around 200MBps). I was also able to kill performance by turning off write caching on the array (dropped to 40MBps). I tuned the network cards the way I normally would for iSCSI connections, but if there was an improvement, it was lost in the noise.

My goal is to rework the storage into a proper failover cluster, but I can't/won't make any changes until I have 2 solid backups. Months of fighting with DPM have left me with nothing but failing backups (or worse - jobs that didn't fail but should have since they wouldn't restore). If 250MBps is the best I can get, but I'm left with working backups, then so be it.

I'm sure there were a few more things I tried that I'm forgetting now that I'm home and writing this out, but I'm open and thankful for any suggestions or pointers you can give.
HannesK
Product Manager
Posts: 14835
Liked: 3082 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Speed sanity check

Post by HannesK »

Hello,
and welcome to the forums.

Just to clarify: are you talking about MByte/s or MBit/s, and which product are you using -- Veeam Agent for Windows, or Backup & Replication with Hyper-V? (Those questions might sound strange, but I have seen "unexpected" things in the past.)

If you use Hyper-V with Backup & Replication: how many VMs do you back up per job? How many disks do the VMs have (there is one task per disk)? And how many concurrent tasks are configured?

Best regards,
Hannes
atmarx
Novice
Posts: 8
Liked: never
Joined: May 15, 2019 12:08 am
Full Name: andrew

Re: Speed sanity check

Post by atmarx »

Hi Hannes --

I'm using the Community Edition of B&R. I'm using MB for MByte and Mb for MBit. The job is currently backing up a single VM (I need different retention policies on each -- 30 days for operational data, 60 days for research data). The operational VM has 6 VHDXs attached, while the research VM has ~80 VHDXs. Once the operational VM is backed up, I'll create the other job, which I plan to chain to the first to avoid overlap. I used all of the defaults -- the host is currently set to allow 4 concurrent tasks.
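Since there's one task per disk, here's my (possibly naive) picture of how the 4-task limit plays out per VM -- treat it as a sketch, since real disks finish at different times:

```python
# One task per virtual disk, limited by the host's concurrent task slots.
# Assumes the 4-task limit is the only throttle in play (a simplification).
import math

task_slots = 4
vm_disks = {"operational": 6, "research": 80}

for name, disks in vm_disks.items():
    waves = math.ceil(disks / task_slots)
    print(f"{name:>12}: {disks} disks / {task_slots} slots -> at least {waves} batches of disks")
```

So the research VM in particular should keep all four slots busy for essentially its whole run.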

I forgot to post the bottleneck stats last night. I'm seeing 36/35/40/60, so B&R believes the target is the weaker link.
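For anyone else decoding that line, I'm assuming the four numbers are in the usual Source/Proxy/Network/Target order, so the quick read is:

```python
# Quick read of the job's bottleneck percentages.
# Assumes the numbers are in Source/Proxy/Network/Target order.
stats = dict(zip(["Source", "Proxy", "Network", "Target"], [36, 35, 40, 60]))

for stage, busy in stats.items():
    print(f"{stage:>8}: busy ~{busy}% of the job")

print("Bottleneck:", max(stats, key=stats.get))   # -> Target
```

Target sitting at 60% with everything else in the 30s-40s is what points the finger at the backup storage rather than the source or the network.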

One thing I noticed is that if I take the connection between the two down (say, for tinkering with NIC settings), when it reconnects it runs at around 500MBps for a moment before dropping back down to 250MBps. If I had to guess, when the connection is lost, the RAID array has a chance to flush its write cache. When the connection is restored, the job can fill the cache back up for that first 20-30 seconds or so before it's back to being disk-limited.
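The rough numbers behind that guess (the burst length and rates are eyeballed, so this is order-of-magnitude only):

```python
# If the post-reconnect burst is just the write cache refilling, how much
# extra data would the cache have to absorb? All inputs are eyeballed.
burst_rate_mb  = 500    # MB/s right after the connection comes back
steady_rate_mb = 250    # MB/s once it settles again
burst_seconds  = 25     # "20-30 seconds or so"

absorbed_gb = (burst_rate_mb - steady_rate_mb) * burst_seconds / 1024
print(f"Extra data absorbed during the burst: ~{absorbed_gb:.1f} GB")
```

A few GB is in the ballpark of what I'd expect a mid-range controller's write cache to hold, so the flush-then-refill theory at least isn't crazy.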

Looking at other folks' speeds, I should probably be happy with the 250MBps I'm getting -- I just want to confirm it sounds appropriate for the setup I've got, or whether there's something I'm overlooking.
HannesK
Product Manager
Posts: 14835
Liked: 3082 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Speed sanity check

Post by HannesK »

Hi,
in general, 250MByte/s for 1-2 VMs sounds good to me. The bottleneck analysis also looks fine to me.

As you have only two VMs in two jobs, I see no other tuning option except trying more concurrent tasks, as mentioned above. But I don't believe it will get much faster, because you are writing a single stream, which is probably the limiting factor.

Best regards,
Hannes
atmarx
Novice
Posts: 8
Liked: never
Joined: May 15, 2019 12:08 am
Full Name: andrew

Re: Speed sanity check

Post by atmarx »

Thanks for that -- I feel better. I'm raising the issue on the Dell PowerVault side to see if those throughputs look right, but it's nice to know Veeam is working as expected. So far, loving it! If it's able to keep running consistently without throwing a hissy fit every week or two like DPM did, then I'm golden.