glamic26
Enthusiast
Posts: 27
Liked: 11 times
Joined: Apr 21, 2015 12:10 pm

Best practices for high throughput FlashBlade NFS target

Post by glamic26 »

I'm looking for some advice from Veeam technical employees and users alike on how best to utilise a Pure FlashBlade device as a Veeam backup target. I am running a POC and, if I understand things correctly, I should be seeing the source as the bottleneck on every run once I have configured the FlashBlade targets and Veeam components in the most efficient way.

Firstly, both initial research and testing suggest NFS is far faster than SMB, so I have 3 physical Linux servers as Veeam repository servers. These each have 2x 10Gb connections in LACP, so a theoretical 20Gbps of throughput per repository. 3x 20Gbps = 60Gbps if I can load balance with maximum efficiency. I appreciate I'm never going to see this, but I would like to get as close as possible.

The FlashBlade system has 4x10Gb connections to each Fabric Module so a theoretical 80Gbps throughput. The Linux repository servers and the FlashBlade are on the same network.

I have 8 physical Windows proxy servers that are configured for Storage Snapshot integration with my 3PARs and Pure FlashArrays. These each have 2x 10Gb connections in LACP, so a theoretical 20Gbps of throughput per proxy (8x 20Gbps = 160Gbps, albeit across two sites, so 80Gbps per site). These proxy servers have legs in the same network as the repository servers and the FlashBlade.
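To keep the numbers straight, here is a quick back-of-the-envelope check of the theoretical figures above (a minimal Python sketch; LACP hashing and protocol overhead are ignored, so real-world throughput will always be lower):

```python
# Back-of-the-envelope conversion of the aggregate link speeds quoted above.
# TCP/NFS overhead and LACP hashing imbalance are ignored, so these are ceilings.

def gbps_to_gb_per_s(gbps: float) -> float:
    """Convert Gbit/s to GB/s (8 bits per byte)."""
    return gbps / 8

repos_gbps      = 3 * 2 * 10   # 3 Linux repositories x 2x10Gb LACP = 60 Gbps
flashblade_gbps = 2 * 4 * 10   # 2 Fabric Modules x 4x10Gb each     = 80 Gbps
proxies_gbps    = 8 * 2 * 10   # 8 proxies x 2x10Gb LACP            = 160 Gbps (80 per site)

for name, gbps in [("Repositories", repos_gbps),
                   ("FlashBlade", flashblade_gbps),
                   ("Proxies", proxies_gbps)]:
    print(f"{name:12s}: {gbps:4d} Gbps ~ {gbps_to_gb_per_s(gbps):5.1f} GB/s")
# Repositories:   60 Gbps ~   7.5 GB/s  <- narrowest theoretical hop end-to-end
```

So on paper the three repository servers are the first ceiling at roughly 7.5 GB/s, which is the target figure I refer to below.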

I can give the FlashBlade multiple VIPs, and my understanding is that the more VIPs, the better the load gets distributed across as many blades in the FlashBlade system as possible. The FlashBlade will obviously work best if I can get as many streams writing from Veeam as possible. However, as far as I understand, Veeam has no concept of a single NFS share presented across multiple repository servers, so I have to mount the NFS shares directly on the Linux repositories and create multiple NFS shares (at least one per Linux repository). The problem this creates is that SOBR does not seem to be the perfect construct for high-throughput target systems, because it decides where to put the backup files based on free space on the extents. Are there any plans for a better load-balancing algorithm in SOBR for high-throughput targets? Otherwise, when I add new NFS shares, SOBR will immediately use only those new extents because they have the most free space. Until then, what would be the best SOBR Placement Policy for performance in my scenario?
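For clarity, here is a minimal sketch of the kind of VIP-to-mount spreading I mean (everything below, hostnames, VIPs, export name, mount paths and counts, is invented for illustration; each mount would become a separate repository/extent):

```python
# Illustration only: spread per-repository NFS mounts round-robin across
# FlashBlade data VIPs so that write streams land on different front-end IPs.
# All names (hosts, VIPs, export, mount paths) are made up for this sketch.
from itertools import cycle

repositories    = ["repo-lnx-01", "repo-lnx-02", "repo-lnx-03"]
vips            = [f"10.0.10.{i}" for i in range(11, 17)]   # 6 hypothetical data VIPs
shares_per_repo = 2                                         # mounts (extents) per repository

vip_pool = cycle(vips)
for repo in repositories:
    for n in range(shares_per_repo):
        vip = next(vip_pool)
        print(f"{repo}: mount -t nfs {vip}:/veeam-backups /mnt/flashblade-{n}")
```

How many mounts per repository and how many VIPs to actually use is exactly what I am asking below.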

So the questions are: are my assumptions correct about having to mount the NFS shares directly on a per-repository basis? And if so, how many NFS shares and how many FlashBlade VIPs should I create to get as close as possible to 60Gbps throughput (7.5GB/s)? Or is this all completely wrong and is there a better way to get maximum throughput?

My latest test was an active full backup of 38 VMs of various sizes, all using storage snapshots, with the 8 proxies across two sites (3ms latency between sites) and the 3 NFS repository servers, each with a single NFS share mounted and each going to a different FlashBlade VIP (3 VIPs in total).

Status: Warning
Protected VMs: 38 of 38
Backup Type: Full
Start Time: 09/09/2019 10:47
Duration: 01 days 07:21:00
Processing Rate: 1819.92 MB/s
Data Size: 249489 GB
Transferred: 88126.79 GB
Total Backup Size: 2316.51 GB
Load: Source 47% > Proxy 63% > Network 83% > Target 13%
*The duration, data size and transferred size in this Veeam ONE report are complete nonsense! The real total data size is 5.6TB and the duration was 49.5 mins!

What do I do next to increase throughput? I am currently getting 1.8GB/s. The bottleneck analysis says Network, but I can't see how that can be the case, as no adapters are maxing out across the proxies or repository servers.
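For reference, the corrected figures do roughly line up with the reported processing rate (a quick check; I am using binary TB/MB here, and the processing-rate counter is calculated differently, so only a ballpark match is expected):

```python
# Sanity check: 5.6 TB processed in 49.5 minutes.
data_mb    = 5.6 * 1024 * 1024    # TB -> MB (binary units)
duration_s = 49.5 * 60

print(f"{data_mb / duration_s:.0f} MB/s")   # ~1977 MB/s, same ballpark as the 1819.92 MB/s reported
```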
nitramd
Veteran
Posts: 297
Liked: 85 times
Joined: Feb 16, 2017 8:05 pm

Re: Best practices for high throughput FlashBlade NFS target

Post by nitramd »

One thing to do is to check the number of concurrent tasks your repositories are set to. If applicable, you can increase the number of concurrent tasks, which should increase throughput.
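As a rough illustration (not a measurement; the per-stream rate and the ceiling below are assumed values), aggregate throughput tends to grow with the number of concurrent streams until a link or target limit is reached:

```python
# Toy model: aggregate throughput scales with concurrent tasks/streams until a
# link or target ceiling is hit. Both figures below are assumptions, not data.
def estimated_throughput_mb_s(streams: int,
                              per_stream_mb_s: float = 300.0,   # assumed single-stream rate
                              ceiling_mb_s: float = 7500.0):    # ~60 Gbps of repository links
    return min(streams * per_stream_mb_s, ceiling_mb_s)

for streams in (4, 8, 16, 25, 32):
    print(f"{streams:2d} streams -> {estimated_throughput_mb_s(streams):6.0f} MB/s")
```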
HannesK
Product Manager
Posts: 14314
Liked: 2887 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Best practices for high throughput FlashBlade NFS target

Post by HannesK »

Hello,
Does https://www.veeam.com/wp-ra-highly-avai ... store.html maybe answer some of your questions?
"I have to directly mount the NFS shares to the Linux repositories"
Correct. Please note that V10 will allow NFS repositories without that Linux server (just like SMB repositories today), so you won't need SOBR at all in the future.

Your bottleneck analysis is okay. As long as none of the components is permanently at 95-99%, it's fine.

If you don't need to scale to 60Gbit in the near future, then I would recommend waiting for V10.

Best regards,
Hannes
glamic26
Enthusiast
Posts: 27
Liked: 11 times
Joined: Apr 21, 2015 12:10 pm

Re: Best practices for high throughput FlashBlade NFS target

Post by glamic26 »

Thanks @nitramd, I should have mentioned that I have all repositories set to unlimited concurrent tasks.

Thanks @HannesK. I have read that guide and I'm fairly certain I am applying all of the best-practice suggestions. That is interesting to know about V10 and NFS repositories, thanks for the insight.

Is V10 a 2019 release target?
foggy
Veeam Software
Posts: 21070
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Best practices for high throughput FlashBlade NFS target

Post by foggy »

"Is V10 a 2019 release target?"
Yes.
