Discussions related to using object storage as a backup target.
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw »

hi,

If this has been asked before, please forgive me - we are late to the party on object storage ;-)

We are currently running Rocky Linux with an XFS MD RAID 10 array (24x8TB spindles) as our primary performance tier media server; this has been running perfectly for a long time now and gives great throughput. We are currently running it as a hyper-converged management and backup layer (using PCI HBA controller pass-through in ESXi to the Rocky VM), but are switching to bare metal to reduce our VMware socket count (a direct result of the Broadcom takeover).

We've switched one of our replication servers to use MinIO and this seems to be working really well (2x12 sets with 2 parity drives per set) and we really like the concept of object stores and the advantages these give us.

If we switched our primary performance tier to an object store, is there any way we can get storage/performance savings similar to what we currently see on synthetic fulls (using XFS-backed file systems), or is this the one negative aspect of using object stores?
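
(For reference, the XFS saving in question comes from reflink block cloning; a minimal illustration, with device and file paths hypothetical:)

Code: Select all

* Format with reflink enabled (the layout Veeam fast clone relies on)
# mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdX
# mount /dev/sdX /mnt/backups
* A reflink copy shares blocks with the source, so a "full" costs almost no space
# cp --reflink=always /mnt/backups/full.vbk /mnt/backups/synthetic-full.vbk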

thanks
Ashley
sfirmes
Veeam Software
Posts: 304
Liked: 146 times
Joined: Jul 24, 2018 8:38 pm
Full Name: Stephen Firmes
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by sfirmes »

Ashley, when using object storage as the performance tier target, you don’t have the ability to create synthetic full backups. You can, however, still create active full backups.

We use a process similar to block cloning that identifies duplicate objects and references them via pointers to help reduce the storage required by a backup.

Hope this answers your question.

Steve
Steve Firmes | Senior Solutions Architect, Product Management - Alliances @ Veeam Software
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw »

thanks Steve, makes sense.
So we'll need to rethink our strategy on synthetics, I guess (and the additional space and overhead required).

When you say you are using a process similar to block cloning, are you referring to the general way Veeam deduplication works, or to something specific to object stores?

It's a shame there isn't a feature on object stores that lets an object consist of pointers to existing objects, similar to the way synthetic fulls work on a COW file system.
JaySt
Service Provider
Posts: 454
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by JaySt »

ashleyw wrote: Mar 07, 2024 3:59 am
We've switched one of our replication servers to use MinIO and this seems to be working really well (2x12 sets with 2 parity drives per set) and we really like the concept of object stores and the advantages these give us.
I'm researching object storage as a backup target more and more as well. Can you perhaps elaborate on the advantages you're seeing? Just curious.

We too have a lot of experience with XFS primary repositories, and the topic of switching to object storage comes up a bit more lately. We're mainly looking into the restore performance difference compared to XFS. We're not too happy with XFS fragmentation and low read performance after using such a repository for quite some time (1Y+, for example), and we're wondering whether object storage is in some way better at providing higher restore performance after longer use.
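
(For what it's worth, the fragmentation in question can be quantified; a hedged sketch, with device and file paths hypothetical:)

Code: Select all

* Overall filesystem fragmentation factor (read-only query)
# xfs_db -r -c frag /dev/sdX
* Extent count of an individual backup file
# filefrag /mnt/backups/job1/full.vbk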
Veeam Certified Engineer
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw » 1 person likes this post

I'm researching object storage as a backup target more and more as well. Can you perhaps elaborate on the advantages you're seeing? Just curious.
From our perspective, some of the advantages of object stores are as follows (I'm sure there are many more, but these are our thoughts so far; see the deployment sketch after this list):
- rebuild time after disk failure is much lower than with traditional RAID.
- ease of setup/repeatability (we are using Docker-deployed MinIO, and with Docker Compose, MinIO can easily be updated and reliably deployed).
- potentially better protection without sacrificing half the disks as in traditional RAID 10, and without the slow rebuild times of RAID 6.
- unmatched scalability and the potential for easy clustering across multiple sites.
- potential for buckets to be replicated using MinIO tiering rather than Veeam copy jobs.
- standardisation of transfer protocols, aligning with cloud-native connectivity.
- flexibility to shift to other S3-compatible endpoints to leverage things like Wasabi/S3 Glacier etc., while being able to accurately estimate bills.
- a highly cost-effective alternative to public cloud (e.g. the cost of a Dell server with 24x16TB disks, depreciated over 36/48 months, plus racking and connectivity, is far lower than equivalent public cloud services), eliminating egress/ingress charges and bill shock.
- modern, elegant software architecture aimed at large-scale usage, with the ability to dial in Prometheus monitoring.
- use of generic hardware, avoiding vendor lock-in.
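
(For anyone wanting to reproduce the Docker deployment mentioned above, a minimal single-node sketch; credentials, ports, and the data path are placeholders:)

Code: Select all

# docker run -d --name minio \
    -p 9000:9000 -p 9001:9001 \
    -e MINIO_ROOT_USER=CHANGEME \
    -e MINIO_ROOT_PASSWORD=CHANGEME \
    -v /mnt/minio-data:/data \
    minio/minio server /data --console-address ":9001"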

I guess we'll switch our primary target to MinIO and benchmark it; if it performs well enough we'll run it, otherwise we'll revert to our current MD RAID 10 with XFS configuration.

We've worked with ZFS and Linux RAID long enough that we are 100% happy with the inherent advantages of software-based disk systems (we haven't used hardware RAID controllers for nearly a decade, since we last used expensive and slow 3ware controllers).

I'm sure this will become a hot topic, especially as solutions like MinIO become widely used to reduce dependency on public cloud (at scale) and deliver real savings to businesses.
ober72
Veeam Vanguard
Posts: 701
Liked: 138 times
Joined: Jan 24, 2014 4:10 pm
Full Name: Geoff Burke
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ober72 »

sfirmes wrote: Mar 07, 2024 4:32 am Ashley, when using object storage as the performance tier target, you don’t have the ability to create synthetic full backups. You can however still create active full backups.

We use a process similar to block cloning that identifies duplicate objects and references them via pointers to help reduce the storage required by a backup.

Hope this answers your question.

Steve
Hi Steve,

Could you not just create a weekly GFS full for that purpose? Not a scheduled synthetic full like before, but essentially doing the same thing, if I am not mistaken.
Geoff Burke
VMCA2022, VMCE2023, CKA, CKAD
Veeam Vanguard, Veeam Legend
Gostev
Chief Product Officer
Posts: 31835
Liked: 7326 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by Gostev »

sfirmes wrote: Mar 07, 2024 4:32 am Ashley, when using object storage as the performance tier target, you don’t have the ability to create synthetic full backups.
That's actually not a correct statement... synthetic fulls are what we do by default with object storage. There really aren't many other options for creating those GFS fulls; the ONLY other option would be active fulls, which are of course extremely inefficient in terms of performance and storage consumption.
ober72
Veeam Vanguard
Posts: 701
Liked: 138 times
Joined: Jan 24, 2014 4:10 pm
Full Name: Geoff Burke
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ober72 »

OK, yes, that makes sense. I was thinking in terms of GFS enabled for longer periods, say 6 months: in that case, would it help reduce storage to add a weekly GFS instead of having only the monthlies?
Geoff Burke
VMCA2022, VMCE2023, CKA, CKAD
Veeam Vanguard, Veeam Legend
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw » 1 person likes this post

I've found a few quirks with using an object store as the primary performance tier.
For years we've been running our 4 virtualised Linux proxies with 8 vCPUs and 8GB of RAM without issues.
But now, during the active full stage, the proxies were running out of RAM, breaking the backups.
We were seeing random failures in the jobs with these types of error messages:
"Connection reset by peer Failed to upload disk '>' Agent failed to process method {DataTransfer.SyncDisk}."

Once we increased the RAM to 16GB per proxy, we saw no more failures in the jobs.

One thing we did notice is that on an active full retry of a job, it now takes a long time before any data is processed.
On a 4TB job with 2 failures, it took 18 minutes before the first byte of data was ingested on the retry.
On a 36TB job with 35 failures, it took about 25 minutes before the first byte of data was ingested on the retry.
The failures were all caused by the message above (it took us several hours, while the job was running, to narrow it down - hence the number of failures).
We can live with the delayed ramp-up time on retries though, especially now that we know the root cause.

We are still crunching the numbers, but we are seeing a significant reduction in object store throughput compared to traditional RAID 10.
The object store starts to show promise when the level of parallelism hitting it is increased, though.

Some of our performance could be optimised if we ran the Veeam proxy components on the Linux host running MinIO itself (this would allow the MinIO server direct access to the fibre-connected all-flash storage without the storage traffic having to traverse the networking stack outside the machine).
At this stage, though, we don't really want to break the "appliance model" of MinIO.
To keep things "simple" in our case, we'd want to run multiple proxies via Docker Compose, so some effort would be required to figure out how to bind multiple proxy instances to different IP addresses on one machine (assuming this is technically possible; see the networking sketch below).
If there is a Docker expert out there who could provide a reference template for running multiple Veeam proxies on a single Linux host (with each proxy bound to a different host IP) from a single Docker Compose file, that would be fantastic for us to experiment further.
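
(Not Veeam-specific, but the generic Docker mechanism for giving each container its own routable IP on the host's LAN is a macvlan network; a hedged sketch, with subnet, parent NIC, and image name all placeholders - there is no official Veeam proxy container image:)

Code: Select all

* Create a macvlan network bridged to the host NIC
# docker network create -d macvlan \
    --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
    -o parent=eth0 proxynet
* Each container then gets its own address on the physical LAN
# docker run -d --name proxy1 --network proxynet --ip 192.168.10.11 some/proxy-image
# docker run -d --name proxy2 --network proxynet --ip 192.168.10.12 some/proxy-image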

If we can figure out how to optimise the use of object stores as a primary performance tier target to match our previous throughput, without having to dig deep for more VMware socket licensing, it'll be a massive win.
thanks!
JaySt
Service Provider
Posts: 454
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by JaySt »

thanks for sharing your experience! great read, much appreciated!
Veeam Certified Engineer
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw »

just to provide an update on all this.
We reinstalled our backup host with VMware ESXi and added it back into vCenter.
I then redeployed a Rocky Linux 8.9 VM with 96GB RAM and 8 vCPU cores, with the same IP I used for MinIO when it was on bare metal (so that I wouldn't lose any of the data in the S3 buckets).
I configured the VM with PCI pass-through so I could pass the Dell HBA330 controller directly to the VM.
I then vMotioned the 4x 8GB RAM Linux proxies back onto the host.
Our throughput increased significantly, as hot add attaches the disks directly over the 2x16Gb fibre ports on that host and then pushes the data to the MinIO appliance, keeping all the traffic on the backup host rather than having it traverse our 10GbE core.

Disappointingly, though, an active full backup run tops out at about 485MB/s on a 22x8TB disk set.
The 22-disk set is split into 2x11 erasure sets in MinIO with 2 parity disks per erasure set (configuration sketch below the output).

Code: Select all

# mc admin info minio1
●  ?????.com:9000
   Uptime: 3 hours
   Version: 2024-03-10T02:53:48Z
   Network: 1/1 OK
   Drives: 22/22 OK
   Pool: 1

Pools:
   1st, Erasure sets: 2, Drives per erasure set: 11

22 TiB Used, 1 Bucket, 12,110,496 Objects
22 drives online, 0 drives offline
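
(For reference, the parity level above is set through MinIO's storage class environment variable; a hedged sketch, drive paths hypothetical:)

Code: Select all

* 2 parity drives per erasure set
# export MINIO_STORAGE_CLASS_STANDARD="EC:2"
# minio server /mnt/disk{1...22}
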
When we ran the same configuration, except with the Rocky 8.9 VM running Linux md with the same hardware in a 22-disk RAID 10 configuration, the throughput on the same job was 979MB/s.

(Interestingly, during tests yesterday on bare metal we played with SAN mode and found it to be roughly half the speed of hot add in our configuration - despite it being presented as the optimal solution.)

So on RAID 10 we lose half the capacity of the disks, giving usable storage of 11x8TB=88TB, but it's twice as fast for active fulls as a MinIO node running the same disk/RAM configuration.

Using MinIO in our config means we lose 4 disks, so we have a usable capacity of (22-4)*8TB=144TB.
The new unit we will order will be kitted out with 24x16TB spindles, so we should end up with the following usable space:
raid10: (12*16TB)=192TB.
MinIO: (24-4)*16TB=320TB.

I'm not sure how well traditional RAID 6 scales at these sorts of drive capacities/set sizes, as rebuild times would likely be horrendous (from past experience), and a rebuild would likely put undue load on the remaining spindles, increasing the risk of another failure.

The other approach we could take is to go all-flash/NVMe on the primary backup unit and then use MinIO for storage tiering as a secondary target, but going all-flash on the primary unit would dramatically increase our operating costs.
I guess we could also scale out sideways, but again that would significantly impact our costs, and in our case it presents connectivity challenges, as our backup unit is currently connected back-to-back into spare ports on our EMC Unity 680F all-flash SAN.

Hopefully some of this helps others weigh the tradeoffs of using object stores as a primary target.
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw »

We are still seeing memory-related failures during an active full run with MinIO (but at a much slower rate), so we've throttled the 4 proxies down to 6 concurrent tasks each and increased the RAM on each up to 20GB. More importantly, we are seeing severe degradation in throughput to MinIO (by more than 10-fold); it's as if the object store/Veeam slows down on a large job as more objects get written to it.

We saw similar slow-down behaviour (on RAID storage) before Veeam moved to per-machine backups as the default (per-machine was the only way large backup jobs would scale), so I can only assume that the internals of the object store format are aligned to the per-job architecture rather than the per-machine-files architecture (the per-machine option is not available for backups to an object store).

It's looking like object stores are just not currently suitable as a primary target for Veeam.

So right now our options are to move back to Linux software RAID 10 (with XFS on top) or to run some tests using OpenZFS (yes, I know this isn't officially supported, but I'm aware of what needs to be done under the Veeam hood to enable reflink support).

Can anyone please confirm this behaviour of object store backups, and explain why there isn't a per-machine backup files option (like the default for non-object stores)?
Gostev
Chief Product Officer
Posts: 31835
Liked: 7326 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by Gostev »

No, this would be some MinIO-specific peculiarity then. We're not seeing anything like this in our performance testing labs on other object storage systems (but we don't have MinIO there). I suggest you ensure your MinIO cluster is sized appropriately for the required load, from a compute resources perspective, by consulting with the vendor.

There is no file system in object storage, so there are no backup files in principle - only a bunch of individual objects (metadata and data blocks)... so the second part of the question simply does not apply to object storage.
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw »

thanks - we are definitely seeing a tenfold degradation towards the end of a job going to MinIO, and our backup set was about 30TB, which we don't consider overly large.

The problem is that the MinIO reference architecture repeatedly states that NVMe/SSD is by far the preferred option, but as you can imagine that isn't viable cost-wise as a primary backup target for most customers - only spinning rust can deliver the bang for buck we've all come to expect.
For whatever reason, spinning-disk-backed object stores (at least using MinIO) are currently not suitable for use as a primary target for Veeam, and the throughput comes nowhere near a traditional software-based RAID 10 set - which is disappointing but technically interesting.
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw » 1 person likes this post

just to provide an update for anyone interested.
We have switched to OpenZFS and are striping across 4 vdevs.
We are now seeing a sustained Veeam throughput of 3-4GB/s on active fulls, which is far higher than any other approach (including Linux software RAID 10).
This is running on a 64GB RAM, 8 vCPU Rocky Linux VM (with PCI pass-through of the HBA330 controller) as our storage server.
We run an additional 4 proxies on that same VMware server (24GB RAM, 8 vCPUs each).
The VMware host has direct connectivity to the fibre layer running an EMC Unity 680F all-flash array (all our primary VM workloads run off the 680F array).
Our total backup set is about 45TB over 400 VMs across 8 Dell blades, split over one main job and 3 smaller ones.

So our takeaway is that OpenZFS rocks in terms of sheer performance and is unbeatable running on spinning rust.
Object stores (in the form of MinIO) have their strengths, but not for a primary performance tier where capacity, throughput, and most importantly cost take precedence.
The performance of hot add and virtualised proxies far exceeded that of SAN mode (and doesn't have the same limitations as SAN mode).
The only negative side of this is that it still requires a VMware VVF license for the backup node, and under the Broadcom licensing that needs to be a minimum of 16 cores per socket, but this is manageable for us in terms of costs and we had factored it into our cost models anyway.

so please Veeam, it'll be fantastic to get the latest release out that makes it easy for people to use reflinks on OpenZFS!

For people interested in this, here is what we did:
We stood up a Rocky 8.9 VM with 64GB RAM, 8 vCPUs, and a 50GB boot disk (sda).
We have an SSD cache drive (a standard 128GB drive) as L2ARC (sdb).
We then use 22 spindles in 4 raidz1 vdevs (6 disks, 5 disks, 5 disks, 6 disks).
We use the 23rd spindle as a hot spare.

Code: Select all

* Install OpenZFS
# dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
# dnf config-manager --enable zfs-testing
# dnf install -y epel-release
# dnf install -y kernel-devel
# dnf install -y zfs
# reboot
# zfs --version
zfs-2.2.3-1

* Identify disks
# lsblk -d|grep disk
sda    8:0    0   50G  0 disk
sdb    8:16   0  128G  0 disk
sdc    8:32   0  7.3T  0 disk
sdd    8:48   0  7.3T  0 disk
sde    8:64   0  7.3T  0 disk
sdf    8:80   0  7.3T  0 disk
sdg    8:96   0  7.3T  0 disk
sdh    8:112  0  7.3T  0 disk
sdi    8:128  0  7.3T  0 disk
sdj    8:144  0  7.3T  0 disk
sdk    8:160  0  7.3T  0 disk
sdl    8:176  0  7.3T  0 disk
sdm    8:192  0  7.3T  0 disk
sdn    8:208  0  7.3T  0 disk
sdo    8:224  0  7.3T  0 disk
sdp    8:240  0  7.3T  0 disk
sdq   65:0    0  7.3T  0 disk
sdr   65:16   0  7.3T  0 disk
sds   65:32   0  7.3T  0 disk
sdt   65:48   0  7.3T  0 disk
sdu   65:64   0  7.3T  0 disk
sdv   65:80   0  7.3T  0 disk
sdw   65:96   0  7.3T  0 disk
sdx   65:112  0  7.3T  0 disk
sdy   65:128  0  7.3T  0 disk

* Create zpool
# zpool create VeeamBackup raidz1 sdc sdd sde sdf sdg sdh -f
# zpool add VeeamBackup raidz1 sdi sdj sdk sdl sdm -f
# zpool add VeeamBackup raidz1 sdn sdo sdp sdq sdr -f
# zpool add VeeamBackup raidz1 sds sdt sdu sdv sdw sdx -f
# zpool add VeeamBackup spare sdy -f
# zpool add VeeamBackup cache sdb -f
# zpool status
  pool: VeeamBackup
 state: ONLINE
config:

        NAME         STATE     READ WRITE CKSUM
        VeeamBackup  ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            sdc      ONLINE       0     0     0
            sdd      ONLINE       0     0     0
            sde      ONLINE       0     0     0
            sdf      ONLINE       0     0     0
            sdg      ONLINE       0     0     0
            sdh      ONLINE       0     0     0
          raidz1-1   ONLINE       0     0     0
            sdi      ONLINE       0     0     0
            sdj      ONLINE       0     0     0
            sdk      ONLINE       0     0     0
            sdl      ONLINE       0     0     0
            sdm      ONLINE       0     0     0
          raidz1-2   ONLINE       0     0     0
            sdn      ONLINE       0     0     0
            sdo      ONLINE       0     0     0
            sdp      ONLINE       0     0     0
            sdq      ONLINE       0     0     0
            sdr      ONLINE       0     0     0
          raidz1-3   ONLINE       0     0     0
            sds      ONLINE       0     0     0
            sdt      ONLINE       0     0     0
            sdu      ONLINE       0     0     0
            sdv      ONLINE       0     0     0
            sdw      ONLINE       0     0     0
            sdx      ONLINE       0     0     0
        cache
          sdb        ONLINE       0     0     0
        spares
          sdy        AVAIL

errors: No known data errors
There is a small "hack" that currently needs to be done to get reflink support on OpenZFS, but I don't want to document it on this forum, and it would be advisable to wait until Veeam officially releases the unsupported hack.
tyler.jurgens
Veeam Legend
Posts: 411
Liked: 232 times
Joined: Apr 11, 2023 1:18 pm
Full Name: Tyler Jurgens
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by tyler.jurgens » 1 person likes this post

Bold move using so many drives in a raidz1. You risk losing your entire pool in the event that a rebuild burns out another disk - rebuilds are high-risk operations, and that failure mode is quite likely. It's one of the main reasons no one recommends RAID 5 anymore (raidz1 has essentially the same fault tolerance). Sure, you get the performance, but you run a large risk there. I don't even run raidz1 at home anymore, for that exact reason.

As for MinIO, Veeam is notoriously hard on object storage systems that use any kind of erasure coding. You could try increasing the block size to 4 MB or 8 MB, but for an on-premises repository I'd go for an XFS repository over a single-node object storage repo, no matter the brand. I don't see the benefit of on-premises object storage unless you go for a system whose erasure coding spans nodes, so that a single node failure can't wipe you out. If your use case includes M365 backups, object storage would be preferred there as well.
Tyler Jurgens
Veeam Legend x3 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @explosive.cloud
JaySt
Service Provider
Posts: 454
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by JaySt »

I'm still planning to test MinIO running on bare metal with a decent hardware RAID controller doing RAID 6 across 12 HDDs - so a single-node object storage target, not doing EC at all. Curious to see the throughput on that, but I have not taken the time to do so.
Veeam Certified Engineer
ashleyw
Service Provider
Posts: 208
Liked: 43 times
Joined: Oct 28, 2010 10:55 pm
Full Name: Ashley Watson
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by ashleyw »

That would certainly be an interesting test, but I think MinIO needs access to the raw disks (just like Linux MD and ZFS etc.), so while it would definitely work, it's another abstraction layer on top of the underlying hardware RAID. My guess is that the caches on the hardware RAID controller could interfere with MinIO's write consistency, so any crash/stall in the underlying RAID (through a failing disk, for example) could potentially interfere with MinIO's operation.
I spoke with a development team earlier today who are all-in on S3 endpoints (for health claims documents); for that use case, having a functionally equivalent AWS S3 endpoint in Azure/on-prem in the form of MinIO helps them stay cloud-agnostic rather than locked into AWS S3, but this is more about data resilience and flexibility than out-and-out performance.
Currently, MinIO as a single-node high-capacity performance tier for Veeam just doesn't seem to cut it, unless our testing strategy is fundamentally wrong.
tyler.jurgens
Veeam Legend
Posts: 411
Liked: 232 times
Joined: Apr 11, 2023 1:18 pm
Full Name: Tyler Jurgens
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by tyler.jurgens »

JaySt wrote: Mar 14, 2024 9:04 am I'm still planning to test MinIO running on bare metal with a decent hardware RAID controller doing RAID 6 across 12 HDDs - so a single-node object storage target, not doing EC at all. Curious to see the throughput on that, but I have not taken the time to do so.
Don't build a RAID and put MinIO on top. It would be better to just give it the underlying disks.
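
(In MinIO invocation terms, that is roughly the difference between the two commands below; paths hypothetical:)

Code: Select all

* Single volume on top of a hardware/software RAID array (SNSD)
# minio server /mnt/raidvol
* Individual drives, with MinIO handling redundancy via erasure coding
# minio server /mnt/disk{1...12}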
Tyler Jurgens
Veeam Legend x3 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @explosive.cloud
JaySt
Service Provider
Posts: 454
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by JaySt » 1 person likes this post

Yes, that's the general advice, but isn't it an "it depends" decision? If I don't want to scale past a single server in terms of capacity or compute power, reducing complexity by having the controller do all the protection would not be a bad idea. Object First came to the same conclusion with their appliances.

I guess I'm trying to say that the statement to always use EC can be challenged.
Veeam Certified Engineer
JaySt
Service Provider
Posts: 454
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by JaySt »

ashleyw wrote: Mar 14, 2024 9:56 am Currently, MinIO as a single-node high-capacity performance tier for Veeam just doesn't seem to cut it, unless our testing strategy is fundamentally wrong.
Interesting. Especially after reading about the tests that led up to this conclusion.

However, I'm not so sure about the other claims regarding access to raw disks being a necessity for MinIO in general. A while back I asked on GitHub, for example, whether the highwayhash bitrot protection was still used by MinIO when configured with a single underlying volume (on RAID). I got a reply that it was still in place. That's all I was looking for, in light of the use case being Veeam backups with single-node scalability limits accepted.

Only the performance testing is left to be done.
Veeam Certified Engineer
tyler.jurgens
Veeam Legend
Posts: 411
Liked: 232 times
Joined: Apr 11, 2023 1:18 pm
Full Name: Tyler Jurgens
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by tyler.jurgens »

JaySt wrote: Mar 16, 2024 9:43 am Yes, that's the general advice, but isn't it an "it depends" decision? If I don't want to scale past a single server in terms of capacity or compute power, reducing complexity by having the controller do all the protection would not be a bad idea. Object First came to the same conclusion with their appliances.

I guess I'm trying to say that the statement to always use EC can be challenged.
No. MinIO is an EC object storage system; Object First uses an S3 gateway in front of a RAID array. So both are S3-compatible, but the underlying storage redundancy is handled differently.
Tyler Jurgens
Veeam Legend x3 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @explosive.cloud
JaySt
Service Provider
Posts: 454
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by JaySt » 1 person likes this post

OK, probably just semantics, but the MinIO SNSD setup (https://min.io/docs/minio/linux/operati ... drive.html) is what I meant.
I was comparing it to Object First in the sense that both solutions rely on the underlying storage hardware for redundancy.
Veeam Certified Engineer
hubertbrychczynski
Lurker
Posts: 1
Liked: 1 time
Joined: Mar 05, 2024 2:29 pm
Full Name: Hubert Brychczynski
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by hubertbrychczynski » 1 person likes this post

tyler.jurgens wrote: Mar 19, 2024 4:25 pm Object First uses an S3 gateway to front a RAID array.
Hi there! Just to clarify: Object First uses direct mode to object storage, not gateway mode.
Gostev
Chief Product Officer
Posts: 31835
Liked: 7326 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by Gostev »

He didn't mean the Veeam gateway. He means it's an S3 server in front of a RAID array vs. in front of a more complex storage system with erasure coding - so less complexity with OF to achieve redundancy within a single appliance.
tyler.jurgens
Veeam Legend
Posts: 411
Liked: 232 times
Joined: Apr 11, 2023 1:18 pm
Full Name: Tyler Jurgens
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by tyler.jurgens »

Exactly right Gostev. Thank you for adding that!
Tyler Jurgens
Veeam Legend x3 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @explosive.cloud
JustBackupSomething
Enthusiast
Posts: 32
Liked: 8 times
Joined: Feb 16, 2023 2:11 am
Full Name: Luke Marshall
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by JustBackupSomething »

Just adding to this convo. We have also seen poor performance from MinIO in larger XFS-based setups (multi-node in our case).
At least for now, ZFS with XFS-formatted zvols seems to be the way to go for both performance and reliability.

Some tools that could be useful (a sketch of typical invocations follows this list):
- Warp - S3 benchmarking tool
- mc (from MinIO) - using --trace --all will help get a much better understanding of what is happening "under the hood" of MinIO and which requests are slow / taking time to complete.
- iostat - queue length and wait times are big stats to keep track of.
- zpool iostat
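
(A hedged sketch of typical invocations; host, alias, and credentials are placeholders:)

Code: Select all

* S3 benchmark against the store
# warp mixed --host=minio.example.com:9000 --access-key=CHANGEME --secret-key=CHANGEME
* Trace every request MinIO serves
# mc admin trace --all minio1
* Disk queue lengths and wait times, refreshed every 5 seconds
# iostat -x 5
* Per-pool request-size histograms
# zpool iostat -r 5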

Using a RAID controller puts a small cache in front of the disks. This can improve performance, as the RAID controller ends up making more optimised writes to the disks, resulting in better overall performance of the setup.

MinIO enforces synced operations and will, by default, wait for operations to complete before moving on (as far as I understand). You may need to do further digging on the MinIO forums / discussion board for more.

Using ZFS with MinIO on top is great, but it can be hit and miss when the array underneath is swamped or also used for something else; as per MinIO best practices, MinIO should have direct access to the disks when possible (as above).

For any ZFS work, zpool iostat is great for debugging slow drives / poorly performing arrays; the -r option gives a better understanding of the size of each operation and how many there are (at least at the pool level).

It makes complete sense that MinIO takes much longer to complete operations, as it needs to read/calculate blocks and to read/write metadata to and from the store. I suspect this is similar to needing an index of each object or its metadata before writing new things.

Being HTTP-based, there is also the overhead of each request, time to first byte, and list operations, which can bog things down.

That all said, there was this post a few weeks ago that might help, on the MinIO Enterprise tier: https://min.io/product/enterprise/object-store-cache
sfirmes
Veeam Software
Posts: 304
Liked: 146 times
Joined: Jul 24, 2018 8:38 pm
Full Name: Stephen Firmes
Contact:

Re: MinIO as an S3 compat backup target - impact on synthetic fulls

Post by sfirmes »

I have been speaking with one of the top engineers at MinIO about this topic, and for performance he recommends that XFS be used for the filesystem. He also mentioned that using smaller erasure sets (2x8 vs 1x16, for example) helps, due to the typically small object sizes they see with VBR and VB365 (see the sketch below).
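
(One hedged way to express the 2x8 layout on a single node is two server pools, assuming each 8-drive pool then forms its own erasure set; paths hypothetical:)

Code: Select all

# minio server /mnt/disk{1...8} /mnt/disk{9...16}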

These are general recommendations, and your environment(s)/use cases may require different settings.

And yes, their new MinIO Enterprise Object Store looks promising; I will update this thread as we learn more about its performance capabilities.

Thanks

Steve
Steve Firmes | Senior Solutions Architect, Product Management - Alliances @ Veeam Software