Within the Backups > Disk > [job] > [properties] we can see the backup chains.
One of our folks went and tripled the disk usage on our SQL server with temporary data, which in turn had a rather substantial impact on our backups. To help mitigate the fact that the repo is now full, I turned off synthetic fulls and am running a script that simply runs the job 8 times in a row, which should, as I understand it, push the oldest synthetic full backup out of retention. Obviously at a cost to RPO.
Which got me thinking: could I just delete the oldest VBK and all of the VIBs between it and the next oldest VBK, then rescan the repository? And if I could, would it be worth Veeam's effort to create a wizard that would allow one to delete entire synthetic chains?
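For reference, a minimal sketch of the kind of script I mean, using the Veeam PowerShell snap-in; the job name and run count are placeholders, and it assumes retention is by restore points so each extra run ages one point out:

# Minimal sketch: run a backup job repeatedly so retention ages out the oldest points.
# Assumes the Veeam B&R PowerShell snap-in is installed; job name and run count are placeholders.
Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

$job = Get-VBRJob -Name "SQL Backup Job"   # hypothetical job name

for ($i = 1; $i -le 8; $i++) {
    Write-Host "Starting run $i of 8..."
    # Start-VBRJob runs synchronously here, so each run finishes before the next begins
    $session = Start-VBRJob -Job $job
    Write-Host ("Run {0} finished with result: {1}" -f $i, $session.Result)
}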
Erik Kisner (Expert)
EJ (Veteran, London)
Re: Feature Request - Chain Manipulation
We had a similar issue to this quite recently. It caused a huge amount of disruption to our backups.
At present I think the best you can do is to avoid using scale-out repositories, so that if you do get a runaway job it only fills one repository instead of traversing volumes and stopping all your backups; only the backups on that one repository are affected. If you only have one repository, you could split it and isolate problem jobs on their own volume.
I'd say that rather than asking for a feature to help with the clean-up, a better feature would be one which prevents a runaway job from disrupting regular backups in the first place: a section somewhere within the job settings that can detect a runaway job and limit it so it does not flood your storage and stop all the other backups.
I'd imagine the settings would be things like the ability to terminate the job if the increment will be more than xx% of the previous increment, or letting the job use a maximum of xx% of the available storage (i.e. quotas).
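Veeam doesn't expose anything like that today as far as I know, but as a stopgap here is a rough self-monitoring sketch of the first idea: compare the newest increment on disk with the previous one and disable the job if it balloons. The repository path, job name, and 3x threshold are placeholders I made up, not real Veeam settings.

# Rough sketch of a runaway check: compare the newest increment (.vib) against the
# previous one and disable the job if it has grown beyond a threshold.
# Repository path, job name, and threshold are placeholders, not Veeam settings.
Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

$repoPath  = "D:\Backups\SQL Backup Job"   # hypothetical per-job repository folder
$jobName   = "SQL Backup Job"              # hypothetical job name
$threshold = 3.0                           # act if the new increment is 3x the previous one

# Newest two increments, by creation time
$vibs = Get-ChildItem -Path $repoPath -Filter *.vib |
    Sort-Object CreationTime -Descending |
    Select-Object -First 2

if ($vibs.Count -eq 2 -and $vibs[1].Length -gt 0) {
    $ratio = $vibs[0].Length / $vibs[1].Length
    if ($ratio -gt $threshold) {
        Write-Warning ("Latest increment is {0:N1}x the previous one; disabling job {1}" -f $ratio, $jobName)
        Get-VBRJob -Name $jobName | Disable-VBRJob
    }
}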
In our case our file server started backing up from scratch and used all the storage. All the other jobs failed, and it took a long time to recover due to the large size of our backups.
It's quite common in day-to-day administration of server environments to find a large volume of new, unexpected data appearing without warning that is within the scope of an existing backup job.
Alexander Fogelson (Veeam Software)
Re: Feature Request - Chain Manipulation
Yes, you can do that. Afterwards, you can remove the deleted records from the UI with the help of the remove missing restore points functionality.
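For anyone scripting this rather than deleting files by hand, here is a rough sketch of the manual approach being confirmed above, assuming a forward incremental chain where each VBK is followed by its dependent VIBs. The repository path is a placeholder and -WhatIf keeps it a dry run; after removing the files, rescan the repository and use remove missing restore points in the console.

# Rough sketch of manually retiring the oldest chain: delete the oldest .vbk and the
# .vib increments that depend on it. Paths are placeholders; -WhatIf keeps this a dry run.
$repoPath = "D:\Backups\SQL Backup Job"    # hypothetical per-job repository folder

# Sort the files by creation time so the chain order is preserved
$files      = Get-ChildItem -Path $repoPath -Include *.vbk, *.vib -Recurse | Sort-Object CreationTime
$oldestFull = $files | Where-Object Extension -eq '.vbk' | Select-Object -First 1
$nextFull   = $files | Where-Object { $_.Extension -eq '.vbk' -and $_.CreationTime -gt $oldestFull.CreationTime } |
    Select-Object -First 1

if ($oldestFull -and $nextFull) {
    # Everything from the oldest full up to (but not including) the next full is one chain
    $chain = $files | Where-Object { $_.CreationTime -ge $oldestFull.CreationTime -and $_.CreationTime -lt $nextFull.CreationTime }
    $chain | ForEach-Object { Remove-Item $_.FullName -WhatIf }
}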