SOBR with multiple high density servers

Post by pirx »

As I have only worked with SOBR based on SMB shares with gateway servers involved, I'm wondering how this will work with high density servers. Having such servers lets you run the repository and proxy roles on the same host, right? I think this is a good thing, especially for performance.

Let's say you want a scale-out approach and use not just 1 or 2 servers, but maybe 6-10. Each has only limited disk space, let's say 2 filesystems with 100 TB each. I can add all of those as SOBR extents, no different from SMB shares.

How does the data flow look then? For the 2 performance extents of each server I would set proxy affinity to the host itself, so source and target data movers run on the same server (proxy = data mover 1, backup repository = data mover 2)? Or is this just one data mover, since both roles are on the same host? So there would be no network traffic from a proxy to a repository during backup? But we could still add proxy-only servers, which would then send backups over the network to the repository servers. How much does this impact performance? I know the examples from the Apollo thread; I just don't know how much of an impact separating the proxy and repository roles will have.

https://helpcenter.veeam.com/docs/backu ... vmware.png

How does Veeam handle SOBR extents filling up? With block cloning, a chain must always be located on the same extent, as far as I understand. We had some bad experiences with smaller extents, as our backup jobs are 20-30 TB.

Re: SOBR with multiple high density servers

Post by HannesK »

Hello,
I assume that you have a proper 10G or better network in place. If you want to make your life easy, just let cross-machine traffic happen (every machine is both proxy & repository). The only reason for me to split the proxy & repository roles is the Hardened Repository, which is a dedicated machine.

Sure, you can spend time configuring proxy affinity and then have unused resources. I would go with default settings unless you see network bottlenecks.

The data flow is always the same: proxy -> network (includes the localhost network) -> repository. See the sticky forum FAQ post94869.html#p94869 - https://helpcenter.veeam.com/docs/backu ... ml?ver=110 also describes it.

100 TB per extent sounds pretty small to me for your environment. Many customers go with 200-600 TB or even more. We are block-cloning aware and place increments on the same extent. We show a warning if someone tries to configure the performance placement policy (which would put fulls and increments on different extents).
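
For context on why a chain is tied to one extent: block cloning on Linux repositories uses the reflink mechanism, which can only share blocks within a single filesystem. Below is a minimal sketch of that mechanism at the syscall level, assuming a Linux host and an XFS filesystem formatted with reflink=1; the helper and paths are hypothetical, not Veeam's actual implementation:

```python
import fcntl

# FICLONE ioctl from linux/fs.h: make dst share all of src's data blocks.
# Cloning works only when both files live on the same reflink-capable
# filesystem (e.g. XFS with -m reflink=1) - a cross-filesystem attempt
# fails with EXDEV, which is why a chain cannot span SOBR extents.
FICLONE = 0x40049409  # value on common Linux architectures

def reflink_copy(src_path: str, dst_path: str) -> None:
    """Create dst as a block-cloned copy of src; no data is duplicated."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

reflink_copy("/mnt/extent1/full.vbk", "/mnt/extent1/synthetic_full.vbk")
```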

Best regards,
Hannes

Re: SOBR with multiple high density servers

Post by pirx »

The small extents are also my concern. With SAN storage this is not an issue, as we'd have much more storage in one system. I'm currently evaluating our different options. High density servers sound nice, but as a backup target they just don't seem to fit our needs.

post410496.html#p410496

Re: SOBR with multiple high density servers

Post by Gostev » 1 person likes this post

On the other hand, smaller extents (by smaller I mean a few hundred TBs) force you to play it safe with the file system. Unless you want to be the first to try a close-to-1PB XFS extent, which I would personally love, because no one else has tried that to date!

Re: SOBR with multiple high density servers

Post by pirx »

You are right. I mean, no, we do not want to test a 1 PB reflink XFS... But I think it's a good idea to have a filesystem of a reasonable size. And I had an error in my numbers: even with the mentioned high density servers we would have 192 TB per filesystem when we use 8 TB disks. I counted each RAID 6 as a single filesystem, but it's the RAID 60 that is one filesystem.
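
For illustration, the arithmetic behind that 192 TB figure; the exact group layout here (2 x 14-disk RAID 6 spans striped as RAID 60) is an assumption, not a statement about the actual chassis:

```python
# RAID 60 stripes across RAID 6 groups; each group loses 2 disks to parity.
disk_tb = 8
groups = 2            # assumed: two RAID 6 spans per RAID 60 volume
disks_per_group = 14  # assumed layout

usable_tb = groups * (disks_per_group - 2) * disk_tb
print(f"{usable_tb} TB per filesystem")  # 192 TB
```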

Currently we have 2x 200 TB SOBR backup repositories at two locations, each with 2 x 100 TB extents. This is not optimal: even if there is 20 TB free, during synthetic operations free space shrinks to 4-5 TB and jobs fail. I guess this will not be an issue with block cloning anymore.
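
A back-of-the-envelope sketch of that failure mode; all sizes here are assumed for illustration, not actual job sizes:

```python
# Free space needed on one extent while a synthetic full is built.
full_tb = 15.0      # assumed size of the existing full backup
increment_tb = 1.5  # assumed size of a daily increment

# Without block cloning the new full is written out as a complete file,
# so the extent temporarily holds the old chain plus a whole new full.
headroom_without_fastclone = full_tb + increment_tb

# With block cloning (XFS reflink), unchanged blocks are shared with the
# old chain; only changed blocks and metadata consume new space.
headroom_with_fastclone = increment_tb

print(f"without Fast Clone: ~{headroom_without_fastclone:.1f} TB")
print(f"with Fast Clone:    ~{headroom_with_fastclone:.1f} TB")
```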

On the copy side we have 2 x 1.2 PB, with 12x 100 TB extents. This will be replaced by high density servers with 2 x 384 TB RAID volumes. Not sure if 384 TB is a reasonable size for a single filesystem; it's a large failure domain.

Re: SOBR with multiple high density servers

Post by HannesK »

Many customers of that size think that 384 TB is fine (and have for some years).

100 TB extents just sound like unnecessary work to me.

Re: SOBR with multiple high density servers

Post by nitramd »

Gostev wrote: Apr 15, 2021 11:48 am
On the other hand, smaller extents (by smaller I mean a few hundred TBs) force you to play it safe with the file system. Unless you want to be the first to try a close-to-1PB XFS extent, which I would personally love, because no one else has tried that to date.
A 1PB XFS extent would be very cool. I'd like to have one but can't afford it!

Red Hat has certified XFS on RHEL 8 for 1PB.
https://access.redhat.com/documentation ... le-systems

@pirx
Don't forget about redundancy on your NAS, i.e. multiple power supplies, RAID controllers, network cards, CPUs, etc.

You can try to calculate the amount of usable (free disk) space on your proposed extents; a RAID calculator should help. I have a repo with 454 TB of usable disk space after putting the disks in a RAID 6 array and adding the XFS filesystem. Calculation: divide usable disk space by raw capacity, so 454 TB / 560 TB equals 0.81, or 81% usable capacity. To be clear, I measured the percentage after putting the disks in an array and installing LVM/XFS.
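
The same calculation as a small helper, using the numbers from this post (the function name is just for illustration):

```python
def usable_ratio(usable_tb: float, raw_tb: float) -> float:
    """Fraction of raw capacity left after RAID parity and LVM/XFS overhead."""
    return usable_tb / raw_tb

# 454 TB usable out of 560 TB raw after RAID 6 + LVM/XFS:
print(f"{usable_ratio(454, 560):.0%}")  # 81%
```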

Re: SOBR with multiple high density servers

Post by pirx »

Regarding redundancy, a high density server can also have multiple PSUs, RAID controllers, and LAN/FC adapters. My main concern is the sheer number of disks, and that there is usually nothing comparable to a distributed RAID.