-
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
- Contact:
Manual health check?
Hi
I had a disk failure on one of my repositories, and this led to some parity errors on the RAID6 that have now been fixed. I want to health check all the copy jobs that are on it; there's an option to schedule a health check, but can I start one manually on each job to check the files?
M
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Manual health check?
Hi,
I think you can use either a SureBackup job for that, or Backup Validator. Also, please check this thread, as the guys there have written a PS script that lets you run the Validator against all backups.
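For reference, a minimal sketch of what such a script could look like, assuming the VeeamPSSnapin and the default Validator install path (both the snap-in name and the path may differ in your environment):

# Sketch: run Veeam.Backup.Validator.exe against every backup known to this server
Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue
$validator = "C:\Program Files\Veeam\Backup and Replication\Backup\Veeam.Backup.Validator.exe"
foreach ($backup in Get-VBRBackup) {
    Write-Host "Validating backup: $($backup.Name)"
    & $validator /backup:"$($backup.Name)"
    if ($LASTEXITCODE -ne 0) {
        Write-Warning "Validator reported errors for $($backup.Name) (exit code $LASTEXITCODE)"
    }
}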
Thank you.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Manual health check?
I'd say AND instead of OR here, since Veeam Backup Validator tests whether the backup file itself was modified/corrupted after being created, while SureBackup ensures that VMs in it are actually recoverable.
Scheduling a health check, though, will allow you to "fix" (re-transfer from source) the corrupt blocks, if any exist.
-
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
- Contact:
Re: Manual health check?
OK, thanks. I have now set up regular health checks, and I'll try the Validator.
If I only have a few restore points in the copy jobs, will any corruption end up being removed and overwritten anyway once retention has been cycled? E.g. if I have 5 restore points, after 5 days will the VBK basically be totally rewritten and new? Or will unchanged blocks still remain in the VBK because copy jobs are like synthetic fulls?
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Manual health check?
If I only have a few restore points in the copy jobs, will any corruption end up being removed and overwritten anyway once retention has been cycled? E.g. if I have 5 restore points, after 5 days will the VBK basically be totally rewritten and new? Or will unchanged blocks still remain in the VBK because copy jobs are like synthetic fulls?
That depends. Consider the chain
F - i1 - i2 - i3 - i4 - i5
where F - full containing some corrupted blocks
i(n) - incrementals
If i1 contains newer blocks that overlap their corrupted versions from F, then the blocks from i1 will replace those from F during the merge. Moreover, in such a case you can even restore your VM to points i1, i2, i3, i4, i5, but you cannot restore to F. If i1 contains no newer versions of F's corrupted blocks, the merge will not heal F.
-
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
- Contact:
Re: Manual health check?
Good morning.
I've run validation on one job so far, and one of the VMs was found to be corrupt.
Statistic:
VM count: 26
Incomplete VM count: 0
Failed VM count: 1
Files count: 160
Total size: 4.8 TB
Validation failed.
The following VMs are corrupted:
1. 'XXXXXXX': File "XXXXXX_2-flat.vmdk" is corrupted. RLE decompression error:
[904352] bytes decoded to [972757] instead of [1048576].
Rather than delete the whole VBK from disk and reseed all of them, can I just delete this single VM?
Thanks
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Manual health check?
You can expand this backup (under the Backups node), right-click this particular VM and select the Remove from disk command. The corresponding VBK file blocks will be marked as free and will be reused by the job in the future. Full data for this VM will be re-transferred during the next job cycle.
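For reference, a rough PowerShell sketch of the same action might look like the following; the names are placeholders, and you should confirm that Remove-VBRRestorePoint matches the GUI's Remove from disk behaviour in your version before relying on it:

Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue
# "Copy Job 1" and "XXXXXXX" are placeholders for the backup copy job and the corrupted VM from the Validator report
$backup = Get-VBRBackup -Name "Copy Job 1"
Get-VBRRestorePoint -Backup $backup -Name "XXXXXXX" | Remove-VBRRestorePoint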
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Manual health check?
However, re-scheduling the health check so that it occurs during the next cycle looks more optimal, since only blocks to replace the corrupt ones will be sent in this case.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Manual health check?
Also there is an option "Remove deleted VMs data from backup after X days" in "Backup copy job" settings.
-
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
- Contact:
Re: Manual health check?
A question: do validation tasks use up repository disk space while they are running? Does each VM get extracted somewhere to be tested?
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Manual health check?
Additional space is not required.
-
- Veeam ProPartner
- Posts: 208
- Liked: 28 times
- Joined: Jun 09, 2009 2:48 pm
- Full Name: Lucio Mazzi
- Location: Reggio Emilia, Italy
- Contact:
[MERGED] Repository crash, how to check backup files
The backup primary repository (local disk, 38 TB) crashed due to a RAID problem and is being rebuilt.
Now chkdsk of the volume is showing several errors (corrupt attribute records, file record segments orphaned). I haven't run chkdsk /f yet but will do as soon as the rebuild completes.
On the volume there are many backup file chains, mostly from backup jobs and some from a remote backup copy job.
I was wondering what the best method is to check whether the files are still good. Is Veeam Backup Validator a valid tool in this case? And would a SureBackup job make the Validator check redundant, or would it be better to run both when possible?
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Manual health check?
Hi,
Your post has been merged - please see the discussion above.
P.S.
foggy wrote: SureBackup job always fires up VMs both in the application group and from the linked backup jobs from the latest restore point. The only option you have here is to specify the date to start the whole SureBackup environment closer to: right-click the job, select Start To in the shortcut menu, and select the desired date and time.
So if you want to verify every backup in the chain, I think it would be better to use the approach described in this thread.
Thank you.
-
- Veeam ProPartner
- Posts: 208
- Liked: 28 times
- Joined: Jun 09, 2009 2:48 pm
- Full Name: Lucio Mazzi
- Location: Reggio Emilia, Italy
- Contact:
Re: Manual health check?
Not sure what you mean by "schedule health checks". I know it's possible with backup copy jobs, but what about regular backup jobs? Is this what the "Integrity" check box under Storage->Advanced->Advanced does?
I understand this check is different from what the Validator does?
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Manual health check?
I know it's possible with backup copy jobs, but what about regular backup jobs?
Yes, that's possible with a PowerShell script.
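As a sketch, the Validator script from earlier in the thread could be scheduled with Windows Task Scheduler; the script path below is a hypothetical placeholder:

# Sketch: register a weekly run of the validation script (path is hypothetical)
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Validate-AllBackups.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 6am
Register-ScheduledTask -TaskName "Veeam backup validation" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest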
I understand this check is different from what the Validator does?
Correct. The integrity check verifies only the storage metadata in the backup file, while Backup Validator can be used to check a backup after consistency problems on the storage.
-
- Veeam ProPartner
- Posts: 208
- Liked: 28 times
- Joined: Jun 09, 2009 2:48 pm
- Full Name: Lucio Mazzi
- Location: Reggio Emilia, Italy
- Contact:
Re: Manual health check?
Ok, so with "scheduled health check" you mean running the Validator against all jobs, and the Health Check option of the backup copy jobs does the same as the Validator. Am I getting this right?
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Manual health check?
When Veeam saves a new restore point to a repository, it calculates checksums for all data blocks in the backup file, and stores the checksums in the backup file.
Health-check and validator do the same thing - both recalculate checksums for data blocks and compare them against the checksums that are already stored.
Health-check: does that for all data blocks in the latest restore point.
Validator: does that for all data blocks in the backup chain.
-
- Veeam ProPartner
- Posts: 208
- Liked: 28 times
- Joined: Jun 09, 2009 2:48 pm
- Full Name: Lucio Mazzi
- Location: Reggio Emilia, Italy
- Contact:
Re: Manual health check?
Pavel, sorry to keep bugging you, but you imply that health check and Validation are two different things.
However, you say:
PTide wrote: Yes, that's possible with a PowerShell script.
The script you reference does nothing more than run the Validator against all jobs. I still don't see how I can schedule a health check (as distinct from validation) then.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Manual health check?
I still don't see how I can schedule a health check
There is no such thing as a health check for regular backup jobs, since the health check operation compares the source backup (the original backup) and the target backup (the copy job backup). In your case you need to use the Validator (automated with a script) in order to confirm that your original backup files are good. A health check can be scheduled for backup copies only and will confirm that they are actually copies of your original backup.
-
- Enthusiast
- Posts: 46
- Liked: 7 times
- Joined: Dec 04, 2013 8:13 am
- Full Name: Andreas Holzhammer
- Contact:
[MERGED] Request: Schedule Compact/Healthcheck independently
Hi,
I'm trying to find an ideal solution for scheduling backup and backup copy jobs in Veeam 9.
My scenario:
~50 VMs with ~10TB Data in total, Backup and Backup Copy jobs
I have set up 6 forever forward Backup jobs with equal backup size, each job is ~1TB on disk on one single RAID10.
The jobs are staggered by 1-2 hours, depending on the average run time of each Backup job and its Copy Job.
Each Backup Job is complemented by a Backup Copy job which is started right after the Backup. These jobs flow to three RAID5 Arrays.
I'm quite happy with this setup so far.
But I also need to schedule frequent health checks and compact jobs. Unfortunately these are tied into the backup and backup copy jobs, run either before or after the backup tasks, and affect the I/O load on the storage during the backup window.
I'd prefer to be able to run these outside my backup window, i.e. during the day or over the weekend, when there is plenty of I/O to spare.
Would this be possible in an upcoming Veeam 10?
Regards,
Andreas
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
-
- Influencer
- Posts: 19
- Liked: never
- Joined: Jun 15, 2016 8:32 am
- Full Name: Infrastructure Team
- Contact:
[MERGED] Compact of Full Backup
Hi guys,
Can I raise a feature request:
I want to be able to schedule "Compact of full backups" separately from my backup jobs.
Currently "Compact of Full Backup" seriously impacts our backup scheduling as we have a scheduling window which we can complete disk backups in comfortably, however when we copy this data to tape we are against a tight window. When "Compact of Full Backup" runs (for several hours) this extends this initial step significantly, our tape jobs fail to complete within schedule etc.
An issue I have identified recently is where the "Compact of Full Backup" occurs outside of our scheduled window (Scheduled for Sunday but happens Monday night this week) which I assume is some form of automated attempt at housekeeping.
Thanks
Owen
-
- Influencer
- Posts: 19
- Liked: never
- Joined: Jun 15, 2016 8:32 am
- Full Name: Infrastructure Team
- Contact:
Re: Request: Schedule Compact/Healthcheck independently
Hi,
Thanks for the links. I have already digested this prior to posting.
The first link provided leaves me assuming that the compact operation is entirely dependent on the backup job being triggered the same day the compact is scheduled, since there is no information in the docs (links provided) indicating either way whether this is the case. From my personal experience, I DON'T want this occurring in the middle of a backup process (the process for me being backup to disk, then duplicate to tape).
Regards
Owen
-
- Product Manager
- Posts: 20406
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Manual health check?
I want to be able to schedule "Compact of full backups" separately from my backup jobs.
Currently, that is not possible. However, how often do you run the compact full operation? What schedule does your tape job have? Thanks.
-
- Influencer
- Posts: 19
- Liked: never
- Joined: Jun 15, 2016 8:32 am
- Full Name: Infrastructure Team
- Contact:
Re: Manual health check?
I run backups Monday to Saturday to disk.
I also run to tape Monday to Friday (no operators around to swap tapes Saturday)
I have limited windows for compacting backups, hence this feature request.
When I had compacting enabled in my jobs, scheduled for Sunday, although backups were only scheduled for Mon-Sat, the compacting would kick in on Monday nights, which was not helpful or expected. The docs are less than clear about the behaviour of compacting and the dependency that the backup job schedule must match the compacting schedule.
I'm not the only person affected by this, and I believe this is a real issue for the wider community: it doesn't just affect the disk jobs, but in its current implementation has knock-on impacts on tape jobs, which isn't ideal to say the least.
Right now I have rescheduled this to run on Saturdays after my Saturday disk job, as I don't have a Saturday tape job to be affected. This is a change I have only made this week, so I'm waiting for the weekend to validate that it works as expected. If that's the case, I'll likely update this to run on the last Saturday of each month.
I'm aware that scheduling this outside of the disk job isn't currently possible, but can you confirm this has been logged as a request please?
Many thanks
Owen
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Manual health check?
Yes, every feature request posted on the forum is accepted.
Btw, your understanding of compact scheduling is correct; it is performed after applying retention at the end of the successful job cycle.
-
- Expert
- Posts: 124
- Liked: 22 times
- Joined: Jul 30, 2015 7:32 pm
- Contact:
[MERGED] Feature Request - Separate Maintenance Schedule
I would like to see an option to have a separate schedule for when maintenance tasks run, such as Merge, Compact, and Health Check.
I have 17 jobs that run overlapping each other. As I started moving towards Forever Forward Incremental, when a job started the merge process, the backup speed for other jobs slowed down. Then another job would finish and start merging, and all remaining backups would slow again, rinse, repeat. It's even worse when one starts a Compact operation. I have my Compacts separated out so only one job does a compact per day.
Ideally, I would like to hold off on all of these types of tasks until a set time, such as 3:00 am when all of my backups are done, and then let the merges go to town, when they won't impact the backup window. I would want an option to run them sequentially, or to let them overlap.
Anyone else see this as helpful?
-
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
- Contact:
Re: Manual health check?
I have enabled `Storage-level corruption guard` on copy jobs once a month. Since v9 I can also enable it on primary jobs, but the trouble is my primary jobs are daisy-chained to run after each other, so if I schedule a health check on job 1 all the others get pushed back, which then also pushes the offsite copy jobs back too. It would be good if corruption checks had their own schedule or a manual trigger (like when an HDD dies and rebuilds, or a RAID parity error gets auto-corrected).
-
- Expert
- Posts: 124
- Liked: 22 times
- Joined: Jul 30, 2015 7:32 pm
- Contact:
-
- Novice
- Posts: 3
- Liked: never
- Joined: Oct 26, 2016 7:03 pm
- Full Name: Justin Price
Re: Manual health check?
+1 to the feature request for separate health checks & compact operations. Would also like to be able to run these on the weekend, during the day, separate from our normal nightly processing window. Thanks!