oykleppe
Lurker
Posts: 1
Liked: never
Joined: Jun 01, 2022 2:15 pm
Full Name: Øystein Kleppestrand

Cleaning up a full repo

Post by oykleppe »

Hi

I have daily backups plus a separate job with monthly backups going to a Linux repo.
Due to bad planning, I ran out of space on the monthly backup repo, and I can't figure out how to solve this without wiping the backup.

Using v11, and the repository is not set to per-machine backup files.

The backup job consists of:
- 1 large old VM with one valid restore point. I want to export this one, but not keep it in the backup job. That should free up 1 TB.
- 1 VM that is not critical, so I can remove it from the job. The job is always failing anyway due to the free space issue.
- 5 VMs with 5 restore points that I want to keep.

My first thought was to export the large old VM to another drive or repo and then delete it from the backup. But from what I can read in the documentation, it is not possible to export to a different location. Deleting the VM from the backup job will not actually free up space without running a compact job first, and the compact job temporarily needs almost as much space as the current backup. That disk is already almost full.
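
For reference, a quick way to sanity-check how much headroom a compact would need is to compare the repo's free space against the largest backup file on it. A rough sketch in Python; the mount point and the .vbk pattern are assumptions, adjust them to the actual repo:

[code]
import shutil
from pathlib import Path

# Hypothetical mount point of the monthly Linux repo.
REPO = Path("/mnt/veeam-monthly")

# Free space on the repository volume.
free_bytes = shutil.disk_usage(REPO).free

# Largest full backup file in the repo; a compact temporarily rewrites
# a full .vbk, so it needs roughly this much extra space.
largest = max((f.stat().st_size for f in REPO.rglob("*.vbk")), default=0)

print(f"Free space:   {free_bytes / 1e12:.2f} TB")
print(f"Largest .vbk: {largest / 1e12:.2f} TB")
print("Compact likely fits" if free_bytes > largest else "Not enough headroom for a compact")
[/code]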

Is there a way to rescue this data?
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Cleaning up a full repo

Post by HannesK »

Hello,
and welcome to the forums.

If it's about those 7 VMs and deleting backups is not an option, then I would purchase an external disk with a few TB of capacity. Alternatively, create a scale-out backup repository and copy all data to a cloud provider. Then clean up everything on-prem and start from scratch with per-machine chains.
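
If the data does end up being copied to an external disk by hand, it is worth verifying the copies before cleaning anything up on-prem. A minimal sketch (the source and target paths are assumptions) that compares checksums between the repo and the copy:

[code]
import hashlib
from pathlib import Path

# Hypothetical paths; adjust to the real repo and the external disk mount.
SOURCE = Path("/mnt/veeam-monthly")
TARGET = Path("/mnt/external-disk/veeam-monthly")

def sha256(path: Path, chunk: int = 1024 * 1024) -> str:
    """Stream the file so multi-TB backup files never need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

mismatches = []
for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    dst = TARGET / src.relative_to(SOURCE)
    if not dst.exists() or sha256(src) != sha256(dst):
        mismatches.append(str(src))

print("All copies verified" if not mismatches else f"Problems with: {mismatches}")
[/code]

Only after the verification passes would I delete anything from the original repository.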

As an alternative to "export to a different location", one could use a backup copy job.

Best regards,
Hannes