What do you guys do to achieve huge repos that aren't built from extents capped at 64 TB each in a SOBR?
There is a built-in limit in many storage systems, VMware, Linux, etc., where a single disk/LUN cannot be larger than 64 TB.
I need to be able to get a repo of, say, 400 TB. Until now I've created several 64 TB repos and combined them into a single huge SOBR. The issue, though, is that if for some reason a synthetic full needs to be done on a smaller extent, chances are the increments sit on another extent, so the synthetic full is actually written as a new real full. That means I can't benefit from fast cloning and the spaceless, 'free' full backups.
Are there other techniques you guys use to get a single simple repo of more than 64 TB? The scope is some storage system (or DAS), VMware, and Linux repos.
I ran into this issue with a SOBR for immutability, where I had a 60 TB and a 10 TB extent. The initial backups were written to the 60 TB extent; the 10 TB extent was added when we saw the space running out. To delete expired immutable restore points daily, we run daily synthetic fulls, which works like a charm when you have enough free space. But in this case the next synthetic full was created on the 10 TB extent, which ran that one out of space. For now I've set synthetic fulls to once per week and try not to have too many jobs do this on the same day, to spread the growth. So if there's a way I can create a huge simple repo, I wouldn't need all these tricks.
- Enthusiast
- Posts: 45
- Liked: 6 times
- Joined: Apr 07, 2021 10:07 am
- Full Name: Michael Riesenbeck
- Product Manager
- Posts: 9848
- Liked: 2607 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
Re: How to beat the 64 TB limit?
I don't see that limit with direct-attached disks.
One of my backup repos is an HPE Apollo with a 300 TB RAID 50 volume, configured as a Linux hardened repo with Ubuntu. It works great so far.
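For reference, Veeam's fast clone on Linux repos relies on XFS with reflinks, so a large volume like that is typically formatted along these lines. This is only a sketch: the device name and mount point below are placeholders, not taken from the post.

```shell
# Format the RAID volume as XFS with reflink support (what Veeam's
# fast clone / spaceless synthetic fulls need); 4 KB block size.
# /dev/sdb is a placeholder for the RAID 50 block device.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb

# Mount it and make the mount persistent across reboots.
mkdir -p /mnt/veeam-repo
mount /dev/sdb /mnt/veeam-repo
echo '/dev/sdb /mnt/veeam-repo xfs defaults 0 0' >> /etc/fstab

# Verify that reflink is enabled on the filesystem.
xfs_info /mnt/veeam-repo | grep reflink
```

Note that reflink requires crc=1, which is why both flags are set together.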
Product Management Analyst @ Veeam Software
- Enthusiast
- Posts: 45
- Liked: 6 times
- Joined: Apr 07, 2021 10:07 am
- Full Name: Michael Riesenbeck
Re: How to beat the 64 TB limit?
In this case it's a Dell storage cabinet connected through iSCSI. There was apparently no way to get LUNs bigger than 64 TB into VMware.
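One common workaround for that VMFS ceiling is to skip the VMware datastore layer entirely and log the Linux repo server (VM or physical) into the iSCSI target with the native in-guest initiator, so the LUN size is only limited by the array and the filesystem. A rough open-iscsi sketch, assuming Ubuntu/Debian; the portal IP and IQN below are placeholders:

```shell
# Install the open-iscsi initiator.
apt-get install -y open-iscsi

# Discover targets on the storage array (placeholder portal IP).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in to the discovered target (placeholder IQN).
iscsiadm -m node -T iqn.2001-05.com.example:storage.lun1 -p 192.0.2.10 --login

# Restore the session automatically on reboot.
iscsiadm -m node -T iqn.2001-05.com.example:storage.lun1 -p 192.0.2.10 \
  --op update -n node.startup -v automatic

# The LUN then shows up as a plain block device (e.g. /dev/sdc)
# and can be formatted as XFS without the 64 TB VMFS limit.
lsblk
```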
- Product Manager
- Posts: 9848
- Liked: 2607 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
Re: How to beat the 64 TB limit?
Yes, that could be the case.
I know that NetApp ONTAP, for example, also has a maximum size of 16 TB per volume/LUN.
NetApp E-Series with iSCSI can go higher; I once tested 150 TB without any issues.
For us, Linux hardened repos are perfect: a standalone Linux server with local disks, plus the immutability flag with Veeam v11. Local disk repos are also more stable and surely perform better than an iSCSI storage connection.
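If a single RAID volume or LUN can't be made big enough, several of them can still be presented to Veeam as one simple repo by joining them with LVM before formatting. A minimal sketch, assuming two local block devices; the device and volume names are placeholders:

```shell
# Two large block devices (placeholders) combined into one volume group.
pvcreate /dev/sdb /dev/sdc
vgcreate veeamvg /dev/sdb /dev/sdc

# One logical volume spanning all free space in the group.
lvcreate -n veeamrepo -l 100%FREE veeamvg

# Format as XFS with reflinks so fast clone still works,
# then mount the combined volume as a single repo path.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/veeamvg/veeamrepo
mkdir -p /mnt/veeam-repo
mount /dev/veeamvg/veeamrepo /mnt/veeam-repo
```

The trade-off is that losing any underlying device takes out the whole logical volume, so each member should itself be RAID-protected.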
Product Management Analyst @ Veeam Software