Comprehensive data protection for all workloads
TropicOasis
Novice
Posts: 4
Liked: 1 time
Joined: Nov 16, 2020 4:49 pm
Full Name: Adam Chalmers
Contact:

Hardened Repos Not Spreading Backups Across Pure Storage LUNs

Post by TropicOasis »

NOTE: No ticket has been opened for this, as I don't believe it's a Veeam issue. I'm asking for help because we're in a huge bind right now. We have 3 LUNs coming from 3 Pure Storage arrays. There's speculation that once LUN_001 on Array 1, which is at 92% capacity, reaches 100%, backups will start being written to LUN_002 on Array 2, and once that fills up, to LUN_003 on Array 3. I'm not convinced that's going to happen. We are not using SOBR on the Backup & Replication servers. See below for the details.

We use SLES 15.3 servers as our hardened repos with the XFS file system. Two of our hardened repos are not writing backup data to all three Pure Storage LUNs, only to one LUN in the volume group. We are using LVM. The LUNs are all over 200 TB (I'm specifying this for a reason I'll explain later in this post), so for example...

Hardened Repo A has LVM enabled and 3x 380 TB LUNs (volumes) presented to the server. Each LUN comes from a different Pure Storage FlashArray//C device. The hardened repo is only writing to one volume in the volume group. Our Linux admins said they set up the volume group to write in a balanced manner across the LUNs, but that's not happening. Here's what the volume group on Hardened Repo A currently looks like (the commands I've been using to check the layout are below the list):

Pure Array 1, LUN_001 (380 TB): 92% full (only LUN that's getting written to)
Pure Array 2, LUN_002 (380 TB): 0% full
Pure Array 3, LUN_003 (380 TB): 0% full
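
For reference, here is roughly how I've been checking the actual LVM layout (vg_repo_a is a placeholder for our real volume group name):

# Show segment type, stripe count, and backing devices for each logical volume.
# A linear LV reports segtype "linear" and fills one PV at a time before moving on;
# a striped LV reports segtype "striped" with stripes=3.
sudo lvs -o lv_name,vg_name,segtype,stripes,stripesize,devices vg_repo_a

# Show how full each physical volume (LUN) is, to confirm only LUN_001 is being used.
sudo pvs -o pv_name,vg_name,pv_size,pv_free

If the first command reports segtype "linear" on a single device, I suspect that would explain why only one LUN fills up.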

The repos whose volume groups have LUNs of 200 TB or less don't appear to have this problem, so I'm wondering if there's a maximum size threshold for each LUN in the volume group. Here are some things I'd like answers/comments/constructive criticism/suggestions on:

1. Is there a recommended LUN size, or an upper limit I shouldn't exceed, in order to balance writes across multiple LUNs?
2. What should the volume group configuration on the hardened repos look like, as a best practice, so they write across multiple LUNs?
3. If there is a size limit, how do I add additional LUNs to the volume group to ensure balanced placement of data? For example, if the upper limit for a LUN is 200 TB, then I would need to add 200 TB (or smaller) LUNs to the volume group from all three arrays.
4. If #3 above is true/required, what is the process if we start with 3x 200 TB LUNs in the volume group, fill them, and then add 3 more 200 TB LUNs to the group? (My rough guess at the process is sketched after this list; please correct it.)
5. Is there a best-practice utilization threshold if #4 is true? For example, when the 200 TB LUNs reach 60% utilization, add another 3x 200 TB LUNs?
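
For question 4, my rough guess at the process would be the following (device names and the mount point are hypothetical, and I'd love to be corrected if this is wrong):

# Add the three new 200 TB LUNs to the existing volume group.
sudo vgextend vg_repo_a /dev/mapper/lun_004 /dev/mapper/lun_005 /dev/mapper/lun_006

# Extend the logical volume, striping the new extents across the three new LUNs.
sudo lvextend -i 3 -l +100%FREE /dev/vg_repo_a/lv_backup

# Grow the mounted XFS file system to use the new space.
sudo xfs_growfs /mnt/backups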

Thanks in advance for your help.

Adam
HannesK
Product Manager
Posts: 14333
Liked: 2895 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Hardened Repos Not Spreading Backups Across Pure Storage LUNs

Post by HannesK »

Hello,
if I understood it correctly, it is one LVM volume group of 3x 380 TB LUNs with XFS on top. That's fine in general.

1. There are no known size limits with XFS volumes. I have heard of 1.5 PB working fine, and I have not heard any complaints.
2. A striped LVM logical volume should do what you want (a rough sketch is below this list). If not, I would check with SUSE.
3. See 1.
4. See 1.
5. See 1.
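
To illustrate point 2: creating the logical volume as striped from the start would look roughly like this (volume group, LV name, and paths are placeholders; verify the exact syntax with SUSE):

# Create one logical volume striped 3 ways across the three LUNs in the VG
# (256 KiB stripe size is just an example value).
sudo lvcreate --type striped -i 3 -I 256 -l 100%FREE -n lv_backup vg_repo_a

# Format with reflink enabled, as in the Veeam user guide's XFS example, so fast clone works.
sudo mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/vg_repo_a/lv_backup

As far as I know, an existing linear LV does not become striped in place; the data would have to be migrated or the LV recreated, so plan carefully for the repo that is already 92% full.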

Best regards,
Hannes