Discussions related to using object storage as a backup target.
pirx
Veteran
Posts: 631
Liked: 97 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Wasabi/CapacityTier: permanently growing buckets

Post by pirx »

We are having - again - the issue that backup data on the capacity tier does not seem to get deleted. We had this a couple of years ago on AWS S3 and, in a very time-consuming process together with Veeam support, deleted >100 TB. Now we are seeing something similar on Wasabi, and it is getting expensive.

We have a retention of 10 weeks + 2 weeks immutability + 10 days of block generations (?). That is roughly 100 days (about 3 months) that backups should be kept before they are deleted from the capacity tier. We are also seeing a reduction in the number of VMs that need to be backed up. After 3-4 months we should therefore see that utilisation stops growing constantly. I also don't see this growth on the performance tier.

There are no orphaned backups on the capacity tier. If I check the objects with the aws cli, I get many from before 2024-11, but as I am not an object storage expert, I can't trace those objects back to backups.
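
For reference, the kind of listing I mean looks something like this (a rough sketch only, run from PowerShell on the backup server; the bucket name and the Wasabi endpoint are placeholders, adjust them to your region):

# list all objects in the capacity tier bucket and keep only those whose
# LastModified date (first column of "aws s3 ls") is older than the cutoff
$bucket   = 'my-capacity-tier-bucket'                  # placeholder
$endpoint = 'https://s3.eu-central-1.wasabisys.com'    # adjust to your Wasabi region
$cutoff   = [datetime]'2024-11-01'

aws s3 ls "s3://$bucket" --recursive --endpoint-url $endpoint |
    Where-Object { $_ -match '^(\d{4}-\d{2}-\d{2})' -and [datetime]$Matches[1] -lt $cutoff }

This only gives me keys, dates and sizes, not which Veeam backup an object belongs to.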

What happens in Veeam when there is still data in a bucket that is not associated with any backup in the Veeam DB? What options do I have to find data that should not be there anymore?


#1 bucket was created in February 2024

Image

#2 bucket was recreated in September because we had inconsistencies caused by Wasabi's API handling that could not be solved by Veeam and Wasabi support. We had to recreate the bucket and upload everything again.

Image
david.domask
Veeam Software
Posts: 2838
Liked: 650 times
Joined: Jun 28, 2016 12:12 pm
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by david.domask »

Hi pirx,

Thank you for sharing the details and sorry to hear about the challenges.

Do you have a Support Case to review this behavior already? If not, please create a case and be sure to include logs for Support to review, as well as details on your analysis that determined there were orphaned blocks.

The block removal is done as part of a Checkpoint removal. Rather than removing individual blocks, blocks are grouped under a logical "checkpoint", and as the checkpoint expires, it and the blocks underneath it are removed (assuming the immutability period has passed; if a checkpoint is still immutable, it will be checked on subsequent runs to see whether it can be removed).

From just the above, I'm not quite sure I can point to any "smoking gun" for orphaned blocks, but it's best to work with Support on this as they will be able to tell more from the debug logs.

Thanks!
David Domask | Product Management: Principal Analyst
pirx
Veteran
Posts: 631
Liked: 97 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by pirx »

Hi David, I've not yet created a case as this will be very time consuming. As a customer I just want to make R&D aware that the handling of capacity tier data still has a lot of room for improvement, both in VBR and in Veeam ONE. For me it's really hard to get good data to work with. I have had several cases over the last years where support found unexpected leftover backups on the capacity tier. This just seems to happen over and over again - and it gets really expensive for us.

I have now used the following script from oleg.feoktistov to get some information about immutable backups on one SOBR:

post485887.html#p485887

Not sure how I should interpret the result. Overall there are 15,500 backups in this SOBR's capacity tier, and ~3,800 of them have a negative remaining immutability time. But the oldest CreationTimeUtc seems to match ~100 days, and a lot of them even have a CreationTimeUtc within the past weeks (not sure about the negative immutability in that case, maybe it gets prolonged again).


Image

Image
david.domask
Veeam Software
Posts: 2838
Liked: 650 times
Joined: Jun 28, 2016 12:12 pm
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by david.domask »

Hi pirx,

Thanks for the extra details -- unless retention is set very long for these backups, I would agree it looks unexpected, but unfortunately a case will be required for more specific information. The checkpoint removal and retention are both logged quite dutifully, and we should be able to get more information (especially with the script output) about what happened during the retention/checkpoint removal.

Just a question though: do you see these backups in the UI under Backups > Object Storage? (You can check the file name in the Properties.) If so, Veeam is still aware of them, and understanding why they were not removed by retention would be the main focus.
David Domask | Product Management: Principal Analyst
pirx
Veteran
Posts: 631
Liked: 97 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by pirx »

The ones from the script I can see there.

Image

I could manually delete the above backup.

13.02.2025 14:34:34 [xxxx - xxx4106] Backup has been removed successfully



There are two different things here: backups that are outside the expected retention/immutability period, and data in the buckets that is not visible in Veeam at all. My CLI dump of objects in the buckets showed even much older data.

I understand that in the end I will have to open a support case. But I also think that Veeam should improve the transparency of data on the capacity tier. I have been struggling with this for years now. Object storage with its small objects (nearly 1,000,000 in our case for the largest buckets) is a complete black box. There is no filesystem. On the performance tier I just look at the filesystem, search for files older than xxx or larger than xxx, and I immediately know what is going on. Here I'm 100% lost without support.

But maybe I'm just not aware of the right reports, scripts and tools. How can I debug this? I did not find much that was useful in the Veeam ONE reports.
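
The closest I can get on the VBR side is something along these lines with the PowerShell module (a rough sketch only; it lists what VBR still tracks, it does not tell you which tier a restore point sits on, and the 100-day cutoff is just my retention math):

# list every restore point VBR still knows about that is older than ~100 days
$cutoff = (Get-Date).AddDays(-100)

foreach ($backup in Get-VBRBackup) {
    Get-VBRRestorePoint -Backup $backup |
        Where-Object { $_.CreationTime -lt $cutoff } |
        Select-Object @{Name='Backup'; Expression={ $backup.Name }}, Name, CreationTime
}

Anything that is still in the bucket but never shows up in such a list is exactly the data I cannot account for.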
veremin
Product Manager
Posts: 20733
Liked: 2401 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by veremin »

We're really sorry to hear that your issues with undeletable backups are still persisting. I understand that you've gone through multiple support cases, and they might be quite frustrating by now. However, without debug logs and a thorough investigation, it will be difficult for us to pinpoint the root causes of your problems.

If you still decide to open a case, we can promise to escalate it almost immediately to the R&D team for further investigation, so it won't remain at Tier 1 or Tier 2 levels for long.

Thank you for your understanding.
pirx
Veteran
Posts: 631
Liked: 97 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by pirx »

I've now created case 07709011, as the growth is very unexpected and it will get very expensive.

Can anyone point me to a useful Veeam ONE report that shows the growth of a capacity tier, other than "Scale-Out Backup Repository Configuration", which is limited to the last 4 weeks? I've searched all the reports but did not find such a basic one.

From the 30 days shown there, I see the expected up/down on the performance tier, but overall it is very stable. On the capacity tier, the only way is up (I'd like to have this chart/report for 12 months).

Currently I only have longer-term data from Wasabi, and it shows more or less constant growth. Retention 70 days + immutability 14 days + block generations 10 days = 94 days, so it should not take more than ~100 days until backups are removed. We have no growth in the number of VMs (+-30 in one year, with 1,200 overall at this location) and I don't see that the overall amount of data has increased. Even then, saturation should be reached at some point. The oldest backups in the "Backups" view are from February, which would match ~100 days (the above calculation never really worked out exactly for us; it is about 20 days more most of the time).

Veeam view for one SOBR

Image

Wasabi view of the above capacity tier bucket, which was created in Sep. 2024

Image


Wasabi view of the other affected bucket, which was created in March 2024

Image
pirx
Veteran
Posts: 631
Liked: 97 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by pirx »

The case is still open. But after I updated to the latest Veeam version, between 200 and 250 TB were removed from each of our main capacity tier buckets within a few days. I can hardly believe that this is just a coincidence, but there is also nothing in the release notes that mentions a fix like this.
pirx
Veteran
Posts: 631
Liked: 97 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by pirx » 1 person likes this post

That's how it looks. I can't tell if the reason was a 'bug' in Veeam, but I can tell this was expensive and I had cases open for this before.

Image

Image
le0n
Novice
Posts: 3
Liked: never
Joined: Feb 22, 2021 10:40 am
Full Name: Leon Wear
Contact:

Re: Wasabi/CapacityTier: permanently growing buckets

Post by le0n »

Hi, I just wanted to say thanks for posting this.

We have been in a similar situation since our migration from VMware to Nutanix AHV - high churn, long immutability, Wasabi capacity tier. Our issue was compounded by having removed the original vCenter but keeping the jobs in a disabled state. Over the last few months our capacity tier has been growing by 1 TB+ per day.

Like yourself, we have had a long-running support case to resolve the issue and have been anxiously waiting for immutability periods to end, hoping to see a large drop as in your most recent screenshots. The most recent patch was released on June 17, very soon after our case was closed, but it hadn't been suggested to us. When it was released we didn't apply it, figuring it was a security hotfix for people running a domain-joined stack. But there is this in the release notes:

"Background checkpoint removal process may lag behind the addition of new data due to poor deletion API call performance on certain on-prem object storage devices, causing continuous backup accumulation. To work around this issue, these API calls will now be called concurrently instead of sequentially."

Prior to the case, we hadn't been fully aware that there are different types of retention that apply in different configurations. Normal retention only applies if the source job is still running and cleans up its own files. Without the source job, "background" retention applies and keeps 3+ restore points indefinitely. We had been in exactly this situation, given we had removed vCenter but kept the jobs (disabled) while retention/immutability still applied. Initially we assumed this might be related to our growing capacity tier. Since deleting the jobs, thus triggering "orphaned" retention, we have been seeing restore points removed from the performance tier (no longer surfaced in the console), but we have been waiting a long time to see corresponding drops in the buckets beyond background churn.

Actual retention with immutability can be hard to figure out; the explanation we were offered is as follows. Due to the object storage structure (metadata "block maps" plus data blocks), when immutability is configured, effectively all restore points under current retention are immutable. Any new object placed there (data block or metadata) is set to be immutable for 90 days to begin with, and this will be extended later if required by the job logic, block reuse, etc. That is, if a newer object depends on earlier data, then that earlier data also remains immutable for the period required by the new data. This is not necessarily surfaced in the console: depended-upon data can persist in the bucket long after it is gone from there.
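
To make that dependency extension easier to picture, here is a toy model (explicitly not Veeam's code, just the idea that a reused block stays locked as long as the newest restore point that references it):

# toy model only: a reused data block can only be deleted once the newest
# restore point that references it has aged out of its own immutability window
function Get-BlockDeletableDate {
    param(
        [datetime[]] $ReferencingRestorePoints,    # creation dates of restore points that reuse the block
        [int]        $ImmutabilityWindowDays = 90  # illustrative window, per the explanation above
    )
    $newest = $ReferencingRestorePoints | Sort-Object | Select-Object -Last 1
    $newest.AddDays($ImmutabilityWindowDays)
}

Get-BlockDeletableDate -ReferencingRestorePoints @([datetime]'2025-01-10', [datetime]'2025-06-20')
# -> 2025-09-18, even though the block was first written back in January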

It's also frustratingly hard to get a sense of what VBR is doing in the capacity tier. We have noticed that the logs in ".\ProgramData\Veeam\Backup\System\Retention" are the source of the VBR console's History > System > Background retention > Retention job, where you see "Failed to perform retention Error: Unable to delete backup in the Capacity Tier because it is immutable until...". It's hard to get an overview within the console, but you can use tools like Agent Ransack to see where immutability still applies over long periods (especially since logs are often truncated in the console). One thing we noticed is that such attempts seem to be made only once (per-VM backups); it doesn't look like there are subsequent attempts after the initial tidy-up fails because it is blocked by immutability.
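
If it helps, those retention logs can also be searched quickly from PowerShell instead of Agent Ransack; a small sketch (assuming the default C:\ProgramData location and using the error text quoted above as the search string):

# find every background retention entry where deletion was blocked by immutability
Get-ChildItem 'C:\ProgramData\Veeam\Backup\System\Retention' -Recurse -File |
    Select-String -SimpleMatch 'immutable until' |
    Select-Object -Property Filename, LineNumber, Line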

We've applied the patch this afternoon and are already seeing this directory fill up with many more logs than usual:
'.\ProgramData\Veeam\Backup\System\CheckpointRemoval\2025-07-17\WasabiBucketName'

...Update: 24 hours later we're down 20 TB.

I appreciate you creating this thread. If we'd not seen it, we'd still be waiting for orphaned job immutability to end, though I'm not sure we'd have seen a drop without the patch, given the above and your experience.

Have a great weekend.