-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Is there still a "need" for active fulls?
Hello,
We are just converting all our backups to a new ReFS repo.
I wonder: is there still any reason we should do periodic active fulls (AF)?
All our backup storage uses DIF, so bit rot should not be a problem on the hardware side. Even if it were, with ReFS and integrity streams there is a second layer of protection against that.
And if I remember correctly, an AF would not help against CBT corruption, as even an AF uses CBT to determine which blocks are in use in the first place, correct?
So I only see one scenario where it helps: if Veeam has some kind of consistency bug... but I have never heard of such a thing.
Markus
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Is there still a "need" for active fulls?
Hello Markus.
There is still a probability of data corruption due to storage-level corruption, which can occur unnoticed by the Veeam backup server, so I would either do recovery verification, such as SureBackup, or keep doing active fulls periodically.
Thanks!
-
- Expert
- Posts: 227
- Liked: 46 times
- Joined: Oct 12, 2015 11:24 pm
- Contact:
Re: Is there still a "need" for active fulls?
With integrity streams enabled, I believe Gostev recommends monitoring for ReFS event 133 ("the file system detected a checksum error") and triggering active fulls via script in that case.
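A minimal sketch of that monitoring idea, assuming a Windows backup server: it pulls recent event-133 entries from the System log via the standard `wevtutil` tool and flags when an active full should be scheduled. The parsing helper and the trigger step are illustrative only; the actual kick-off (e.g. Veeam's `Start-VBRJob -FullBackup` PowerShell cmdlet) is left as a comment.

```python
import re
import subprocess

EVENT_ID = 133  # ReFS: "The file system detected a checksum error"

def refs_checksum_errors(log_text: str) -> int:
    """Count event-133 entries in text-formatted wevtutil output."""
    return len(re.findall(r"Event ID:\s*133\b", log_text))

def query_system_log(max_events: int = 10) -> str:
    """Fetch the most recent event-133 entries from the System log (Windows only)."""
    result = subprocess.run(
        ["wevtutil", "qe", "System",
         f"/q:*[System[(EventID={EVENT_ID})]]",
         f"/c:{max_events}", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    if refs_checksum_errors(query_system_log()) > 0:
        # Hypothetical trigger point: invoke the backup job here, e.g.
        # via Veeam's PowerShell cmdlet: Start-VBRJob -Job <name> -FullBackup
        print("ReFS checksum error detected - schedule an active full")
    else:
        print("No ReFS checksum errors found")
```

Running this on a schedule (Task Scheduler) gives you active fulls only when the filesystem actually reports a problem, rather than on a fixed cadence.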
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Is there still a "need" for active fulls?
Hello,
How can there be corruption from the storage when the storage uses DIF? Or do you mean firmware bugs?
Markus
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Is there still a "need" for active fulls?
There is a tiny probability of a bug or data corruption at every stage of the data flow, including fibre, ports, etc.
The probability is small, but I would still recommend performing recoverability checks.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Is there still a "need" for active fulls?
To believe that technologies like DIF and integrity streams can fully protect you, you have to believe that backup corruption can only come from block-level, bit-rot-type disk corruption, i.e. that no bugs exist in Veeam, the OS (filesystem and device drivers), or the data source (i.e. CBT), and that nothing environmental can impact backup data (a BSOD at an inopportune time, for example).
In the lab I've already seen corrupt backups on ReFS due to what appears to be a Windows issue that caused the system to BSOD during the synthetic fast clone operation. Admittedly in this case the old backups seemed OK, but the chain itself was unrecoverable and I had to run active fulls.
I will admit that if you use storage with DIF-level protection or ReFS integrity streams (assuming mirror/parity Storage Spaces), keep offsite copies of your data, and run health checks and compact operations, you should be very, very safe: much safer than with any technology previously used to store Veeam backups. However, it's still hard to argue that running the occasional active full wouldn't offer at least some additional benefit.
It really comes down to how risk averse you are. I've spent a lot of years planning for disaster and I know that what has saved me many times is being as risk averse as possible, because that makes you take the most conservative approach. Perhaps, as time moves on, and ReFS proves itself in the field to be truly resilient to bit rot and other issues (based on support cases), I'll feel better about it, but for now, I still think the occasional active full isn't a bad idea, even if it's not "needed".
-
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
- Contact:
Re: Is there still a "need" for active fulls?
tsightler wrote: In the lab I've already seen corrupt backups on ReFS due to what appears to be a Windows issue that caused the system to BSOD during the synthetic fast clone operation. Admittedly in this case the old backups seemed OK, but the chain itself was unrecoverable and I had to run active fulls.
Tom, that bug was fixed in the latest MS update, right? I had been playing with 64k and 4k clusters, and it seemed to affect only 4k-cluster drives; it basically destroyed my test 4k-cluster drive, while all my 64k drives have been fine. Not sure if it only affected 4k drives or was just much more likely to there, but like I said, I'm pretty sure that was fixed recently in any case (thankfully).
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Is there still a "need" for active fulls?
I'm not 100% sure, as I've actually seen 3 different issues that may all be related, or perhaps not. However, like you, I haven't seen this impact any system with a 64K cluster size, although perhaps if I were working with systems that were 16x larger, it could happen there as well (the biggest system I currently have access to is about 100TB).
-
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
- Contact:
Re: Is there still a "need" for active fulls?
I've got 3 LUNs, each about 64TB, all now with 64k clusters after the pain of the crashes with the 4k cluster size (although I was leaning toward 64k clusters anyway).