Good morning, I'm running Veeam Backup & Replication 12.3.2.3617 with an NFR license in a home lab to maintain familiarity with Veeam products while waiting to use it again at work.
I've added a second repository: an S3-compatible object storage bucket on Infomaniak (Infomaniak Swiss Backup).
The bucket has a nominal 1TB capacity.
The job keeps a monthly full plus its subsequent incrementals; when a new full is created, the previous full and its dependent incrementals are deleted.
I back up a PVE host (predictable backup size) and my Windows workstation (unpredictable backup size, sometimes a few GB per day, sometimes as much as 50 GB).
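As a rough illustration of why the cap gets hit (the numbers below are made-up assumptions, not my actual sizes): while the new monthly full is being written, the old full and its incrementals still exist, so the bucket briefly has to hold both.

```python
# Rough peak-usage estimate for a "monthly full + incrementals" chain where
# the old chain is only deleted after the new full completes.
# All figures are illustrative assumptions, not measured values.
full_gb = 300          # assumed size of one full backup
daily_incr_gb = 15     # assumed average daily incremental (mine vary from a few GB to 50)
days_in_chain = 30     # incrementals kept between monthly fulls

steady_state_gb = full_gb + daily_incr_gb * days_in_chain
peak_gb = steady_state_gb + full_gb   # old chain plus the new full, before cleanup

print(f"steady-state chain: ~{steady_state_gb} GB")    # ~750 GB, fits in 1 TB
print(f"peak during the monthly full: ~{peak_gb} GB")  # ~1050 GB, over the cap
```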
A few times I've exceeded the 1TB limit and everything ground to a halt: I couldn't even delete individual restore points, and the only way out was to delete the entire bucket and start over.
I'm wondering whether the 1TB soft limit could be changed in future versions of VBR. Rather than 1TB increments, it would be more convenient to set it in 100GB increments. It would also help if it could behave as a sort of "hard limit": VBR would check how much space is actually available in the bucket and not even attempt to transfer data if there isn't enough.
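In the meantime, a manual pre-check from outside VBR is possible. Here's a minimal sketch with boto3, assuming an S3-compatible endpoint; the endpoint URL, credentials, bucket name and quota below are placeholders, not my real values:

```python
# Sum the size of all objects in the bucket and compare against the quota
# before letting the Backup Copy window start.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.swiss-backup.example",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                  # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

BUCKET = "veeam-copy"      # placeholder bucket name
QUOTA_GB = 1000            # provider-side bucket cap
SAFETY_MARGIN_GB = 100     # refuse to run if less than this is free

used_bytes = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        used_bytes += obj["Size"]

used_gb = used_bytes / 1024**3
free_gb = QUOTA_GB - used_gb
print(f"used: {used_gb:.1f} GB, free: {free_gb:.1f} GB")

if free_gb < SAFETY_MARGIN_GB:
    raise SystemExit("Not enough headroom -- skipping the Backup Copy run")
```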
I'm also wondering if there's a way to delete individual backups from the S3-compatible repository without having to delete the entire job.
I'll try to explain my setup better:
I have three jobs: one for the Windows workstation, one for the PVE VMs, and one for the PVE host. All jobs save to a network drive.
I also have a Backup Copy job that takes the previous backups and copies them to the S3 bucket.
If I found I was running out of space, I'd have to delete the entire "sub-job" with all of its restore points, unlike the main NAS backup repository, where I can delete individual VIB/VBK files by going to "Files -> Repository -> Browse Folders."
I realize that a "classic" repository and an S3 bucket aren't comparable by nature, but if S3 restore points could be managed a little more easily, I think it would be more convenient for all users.
P.S. I use a NAS repository for long-term backups, an external USB hard drive for medium-term retention, and the S3 bucket in the cloud for short-term, just in case I lose my onsite backups due to theft or fire. Once I'm more familiar with S3 and have been using it for a year, I'll also start considering immutability to protect against ransomware.
Thanks for your attention. Sorry if I've asked something that's already been covered.
Re: [Feature Request] Friendly S3 Bucket Management
Hi Mario,
Soft Limit: Just to make sure I understand correctly, the S3 bucket itself is capped at 1 TB by the S3 provider, and the soft limit behavior (the current job completes, but no further jobs run) is what leaves you needing to delete the bucket?
Is the 1 TiB limit adjustable on the S3 provider side after it's been reached? Pre-checking the space at first blush seems like a good idea, but with the space savings that occur it can be quite difficult to predict the result, and I suspect many users would want exactly the opposite: allow the total pre-space-savings amount to be written, knowing that during backup the used space will be reduced to fit within what remains. Ultimately, if we can understand your schema for bucket provisioning a bit more, it would help us advise better.
As for removing points more granularly, understood, but as you noted the structure for Object Storage is not the same as individual VBK/VIB files. One of Object Storage's biggest advantages over traditional storage is that the physical storage is abstracted away from the logical S3 buckets, so expanding the capacity should be fairly simple.
If you could explain your bucket provisioning strategy a bit more, I think that's probably the best way to approach this for now.
P.S. - You might try running Background Retention manually after lowering the retention on one of the jobs to free up space; this does not require running the job itself and works independently of it.
David Domask | Product Management: Principal Analyst
Re: [Feature Request] Friendly S3 Bucket Management
Hi, the maximum bucket size on the provider side is 1TB. Veeam's minimum soft limit is also 1TB, but according to the documentation it isn't strict, so Veeam can exceed it.
There's also a slight discrepancy, of about 50GB, between the free space reported by Veeam and by the bucket provider.
If the bucket fills to 100%, I can't do anything: even accessing it with the official Cyberduck client, I can't delete objects or perform any operation. The only way to recover is to delete the bucket completely and rebuild it.
I've now set the provider quota to 900GB, so that if it fills up I can add a few GB of headroom and get back to a working state.
However, Veeam still believes it has 1TB of space, because that's the minimum value for the soft limit.
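To make the mismatch concrete, here's a rough sketch using the figures above; the direction of the ~50GB reporting difference is an assumption on my part.

```python
# Gap between Veeam's view and the provider's quota, using the numbers above.
veeam_soft_limit_gb = 1000   # minimum value VBR allows for the soft limit
provider_quota_gb = 900      # what the provider now actually enforces
reporting_gap_gb = 50        # approximate difference between the two consoles

# When the provider says the bucket is completely full...
veeam_sees_used_gb = provider_quota_gb - reporting_gap_gb        # ~850 GB
veeam_thinks_free_gb = veeam_soft_limit_gb - veeam_sees_used_gb  # ~150 GB

print(f"Provider: bucket full. Veeam: ~{veeam_thinks_free_gb} GB still free, keeps writing.")
```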
If the soft limit could be set in multiples of 100GB, it would be more convenient for those with a "small" S3 repository.