Discussions related to using object storage as a backup target.
DDIT
Expert
Posts: 147
Liked: 28 times
Joined: Oct 29, 2015 5:58 pm
Full Name: Michael Yorke

Unwanted restore points in S3 Capacity Tier, how best to delete?

Post by DDIT »

Hello,

My backup job retention settings specify 12 months, 20 years, but at some point in the past the VM was moved to another host and was recognised by Veeam as a new VM, so a new full backup was taken and a new chain started. That's fine. However, as a consequence, the GFS policy was not applied to the backups which were copied to our capacity tier. In the capacity tier I now see monthlies from 2022, which I would like to delete, whilst keeping the last yearly backup from 2021. Screenshot: https://i.imgur.com/sKQVx7F.png

What is the recommended approach? Should I export the yearly backup individually (selecting the same SOBR as the target repo in the export wizard), then simply select 'delete from disk' to remove the monthly restore points?

Thanks.
Mildur
Product Manager
Posts: 8735
Liked: 2295 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Unwanted restore points in S3 Capacity Tier, how best to delete?

Post by Mildur »

Hello Michael

You can't delete specific restore points.
Exporting your yearly backup to the SOBR and then deleting the other backups would work as a workaround. Please be aware of the size: if the backup exists only in the capacity tier, the full 2.4 TB will be downloaded from it. This may cost you a lot of money in API calls and data transfer if you use a service like Azure or AWS.
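To give you a rough idea, here's a back-of-the-envelope estimate in Python. The per-GB rates are assumptions for illustration, not current quotes; check your provider's pricing page (Wasabi, for instance, advertises no egress or API charges):

Code:

# Rough egress-cost estimate for pulling 2.4 TB out of object storage.
# All rates below are illustrative assumptions; check your provider's
# current pricing page before relying on these numbers.
size_gb = 2.4 * 1024  # ~2458 GB

rates_per_gb = {
    "AWS S3 (assumed internet egress tier)": 0.09,
    "Azure Blob (assumed bandwidth tier)": 0.087,
    "Wasabi (advertises no egress fees)": 0.0,
}

for provider, rate in rates_per_gb.items():
    print(f"{provider}: ~${size_gb * rate:,.2f}")
# AWS S3 (assumed internet egress tier): ~$221.18
# Azure Blob (assumed bandwidth tier): ~$213.81
# Wasabi (advertises no egress fees): ~$0.00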

Since V12, we have a background retention job built into the product. That job takes care of removing GFS backups from the capacity tier after the GFS restore point has also been removed from the performance tier extent.

https://helpcenter.veeam.com/docs/backu ... iderations
[For backups stored in the capacity tier] Background retention job does not delete capacity tier copies of backup data directly. However, if background retention removes local copies of backups, they may also be marked for removal on capacity tier. In such a case, cleanup during the next SOBR offloading session will remove them from the capacity tier.
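If it helps, here is a toy sketch of that two-phase cleanup. All names and data structures are invented for illustration; this is not how the product is actually implemented:

Code:

# Toy model of the two-phase cleanup quoted above. Names and data
# structures are invented for illustration; this is not Veeam code.
local_backups = {"vm1-2022-01.vbk", "vm1-2022-02.vbk"}
capacity_tier = {
    "vm1-2022-01.vbk": "ok",
    "vm1-2022-02.vbk": "ok",
    "vm1-2021-12.vbk": "ok",  # yearly with no local copy left
}

def background_retention(expired):
    # Phase 1: drop expired local copies and mark their capacity
    # tier counterparts for removal.
    for name in expired:
        local_backups.discard(name)
        if name in capacity_tier:
            capacity_tier[name] = "marked_for_removal"

def offload_session():
    # Phase 2: the next SOBR offload purges whatever was marked.
    for name in list(capacity_tier):
        if capacity_tier[name] == "marked_for_removal":
            del capacity_tier[name]

background_retention({"vm1-2022-01.vbk", "vm1-2022-02.vbk"})
offload_session()
print(capacity_tier)  # {'vm1-2021-12.vbk': 'ok'}

Note that the 2021 yearly is never touched, because there is no local copy for phase 1 to act on. That is exactly why background retention can't help with backups that exist only in the capacity tier.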
Best,
Fabian
Product Management Analyst @ Veeam Software
DDIT
Expert
Posts: 147
Liked: 28 times
Joined: Oct 29, 2015 5:58 pm
Full Name: Michael Yorke

Re: Unwanted restore points in S3 Capacity Tier, how best to delete?

Post by DDIT »

Thanks Fabian,

I will go ahead and export to the SOBR, then delete the other backups.
Mildur wrote: If the backup exists only in the capacity tier, the full 2.4 TB will be downloaded from it.
Correct, it only exists in the Capacity Tier. I'm using Wasabi. If the source of the export is the SOBR capacity tier and the destination is also the SOBR capacity tier, will that still download and upload the 2.4 TB file? And will I need temporary space equal to that on my performance tier, or will it 'stream' the file?

Last question... The background retention job sounds useful. Is there anything I need to do to enable that, or is it automatic? Unfortunately, in this case, our GFS restore points were moved to the capacity tier a long time ago.
Mildur
Product Manager
Posts: 8735
Liked: 2295 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Unwanted restore points in S3 Capacity Tier, how best to delete?

Post by Mildur »

You will need the additional 2.4 TB of space on your performance tier, and later on Wasabi as well.
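If you want to sanity-check the free space on your performance tier extent before starting the export, a quick Python check like this works (the path is a placeholder for your own repository folder):

Code:

import shutil

# Placeholder path; point this at your performance tier extent.
extent_path = r"D:\VeeamRepo"
required_tb = 2.4

free_tb = shutil.disk_usage(extent_path).free / 1024**4
print(f"Free: {free_tb:.2f} TB / required: {required_tb} TB")
if free_tb < required_tb:
    print("Not enough room for the exported full backup.")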

Background retention is enabled automatically in v12 and later, but it doesn't work on the capacity tier alone.
The original SOBR must still exist: when the background retention job removes the local backups, it marks the matching GFS backups in the capacity tier as deleted as well, and the next offload session then clears the obsolete backups.

Best,
Fabian
Product Management Analyst @ Veeam Software
DDIT
Expert
Posts: 147
Liked: 28 times
Joined: Oct 29, 2015 5:58 pm
Full Name: Michael Yorke

Re: Unwanted restore points in S3 Capacity Tier, how best to delete?

Post by DDIT »

Thanks for clarifying.

I started the export today. It ran for a few hours, providing a useful summary of which virtual disk it was currently exporting and that disk's progress. It successfully exported some of the virtual disks, but failed on the 5th disk. Error...

Code:

19/01/2024 13:15:58 Error    Failed to export backup
Error: Bad Data. Failed to call CryptDecrypt
AesAlg failed to decrypt, keySet: ID: <redacted> (archive), keys: 1, repair records: 1 (master keys: <redacted>)
Unable to retrieve next block transmission command. Number of already processed blocks: [103969].
Failed to download disk 'Data.vhdx'. Agent failed to process method {DataTransfer.SyncDisk}.


I hope this is a transient error rather than a data error! I have created support ticket #07097261.