Hi,
we have a customer that uses VBR v5 for replication. Last week, one of the incrementals was corrupt, the next ones were good again (the GUI told us so).
I know that you can revert to an older restore point if your last one is corrupt, but what about the situation where one of the incrementals in the middle is corrupt? As far as I understand, all newer vrb files in the chain are needed to revert to a restore point. So let's assume the following situation:
today: full - ok
today-1: rev incr - ok
today-2: rev incr - ok
today-3: rev incr - failed/corrupt
today-4: rev incr - ok
today-5: rev incr - ok
If I want to revert back to today-5, all vrb files from the last 5 days have to be applied to the full, but today-3 is corrupt. Does it work, or can I only revert back to today-2?
Regards,
Oliver
Re: Replication with corrupt incrementals in between
Hi Oliver, if a reversed increment is corrupt, then you cannot fail over beyond it. I am not sure, however, how it is possible to end up in the above situation (any failed/corrupt restore point will be removed by the following job pass), unless the corruption is at the disk level and was introduced after the restore point was created. Thanks.
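To illustrate the dependency described above, here is a minimal Python sketch (not Veeam code; the chain layout is simply taken from the question): reverting to an older restore point requires every newer .vrb rollback to be intact, so a corrupt rollback cuts off all points behind it.

[code]
# Minimal sketch (illustrative only, not Veeam's implementation):
# why a corrupt reversed increment (.vrb) blocks failover to any
# restore point older than itself.

def reachable_restore_points(chain):
    """chain[0] is the current full (.vbk); chain[1:] are .vrb rollbacks,
    newest first (True = intact, False = corrupt). A restore point is
    reachable only if every rollback between the full and it is intact."""
    reachable = ["full (today)"]
    for age, rollback_ok in enumerate(chain[1:], start=1):
        if not rollback_ok:
            break  # corrupt .vrb: everything older is unreachable
        reachable.append(f"today-{age}")
    return reachable

# Oliver's chain: the today-3 rollback is corrupt
chain = [True, True, True, False, True, True]
print(reachable_restore_points(chain))
# -> ['full (today)', 'today-1', 'today-2']  (today-3..today-5 are lost)
[/code]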
Re: Replication with corrupt incrementals in between
Hi Anton,
good to hear. What we saw at the customer's site was a replication job that had been running for some weeks. Last week, some of their developers made heavy changes to the VM during the snapshot phase, so the disk ran out of space. They stopped the replication job, but the replica properties still showed the state as OK. The next replication run took 7 hours and also showed an OK state, but both the replication size and the data size in the properties were 0 KB. The run after that showed 9.41 TB of data size, but the job never ended, so we had to kill it.
The only solution was to delete all replicas and restart the replication chain from the beginning.
So the question was whether we had hit the problem mentioned above (a corrupted restore point in between rendering the whole chain unusable) or some other problem.
Regards,
Oliver