- Service Provider
- Posts: 3
- Liked: never
- Joined: Nov 08, 2017 2:06 pm
I had a conversation with a customer where I recently installed a new backup server. For long-term retention of backup data, I configured one repository on a ReFS volume and created a Backup Copy job with GFS settings to move data to that repository. Everything works as intended, and the full backups are created via FastClone. The customer asked: what happens if this runs for a while and there is data corruption in the initial backup that the following backups reference? Is the whole chain of dependent full backups corrupted then?
I did not find anything related on the forum. Can somebody help me here?
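To make the concern concrete, here is a small Python sketch of the block-cloning idea behind FastClone. This is purely an illustrative toy model, not Veeam's or ReFS's actual on-disk format: each synthetic full references unchanged blocks instead of copying them, so one bad physical block can surface in every full that points at it.

```python
# Toy model of block cloning (illustrative only): synthetic full backups
# share unchanged blocks by reference instead of copying them.

# A "volume" of blocks, addressed by block id.
blocks = {0: b"data-A", 1: b"data-B", 2: b"data-C"}

# Each full backup is just a list of block ids it references.
full_1 = [0, 1, 2]        # initial full backup
full_2 = [0, 1, 3]        # next full: block 2 changed, blocks 0 and 1 are cloned
blocks[3] = b"data-C2"    # only the changed block is physically written

# Simulate silent corruption of a block shared by both fulls.
blocks[0] = b"garbage"

def read_backup(refs):
    """Materialize a full backup by following its block references."""
    return [blocks[i] for i in refs]

# Both fulls now return corrupted data for the shared block,
# even though only one physical block went bad.
print(read_backup(full_1)[0])  # b'garbage'
print(read_backup(full_2)[0])  # b'garbage'
```

So yes, in this model every full that references the corrupted block is affected; the question is how likely the storage layer is to let such corruption go undetected.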
- Posts: 444
- Liked: 101 times
- Joined: Aug 20, 2015 9:30 pm
If you run ReFS on Storage Spaces Direct, it can self-heal when it detects corruption, but this is similar to what a good hardware RAID controller or storage array can do. Either way, I see the risk of corruption as very low, but it is not zero, so that is up to the customer to decide. The alternative is having more space (or using tape backup) to store fully independent copies, but then that means more $$$, of course!
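The self-healing idea above can be sketched in a few lines of Python. This is only a conceptual model, not the actual ReFS implementation: a checksum stored per block detects corruption on read, and a healthy mirrored copy (as provided by mirrored Storage Spaces or a RAID array) is used to repair the bad copy.

```python
# Conceptual sketch of checksum-based self-healing with a mirror
# (assumption: this mirrors the idea, not ReFS internals).
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Two mirrored copies of the same block, plus a stored checksum.
good = b"backup-block"
stored_sum = checksum(good)
copy_a = bytearray(good)
copy_b = bytearray(good)

# Silent corruption hits one copy.
copy_a[0] ^= 0xFF

def read_with_heal() -> bytes:
    # On read, verify the checksum; if copy A fails verification,
    # repair it from the healthy mirror copy ("self-healing").
    if checksum(bytes(copy_a)) != stored_sum:
        copy_a[:] = copy_b
    return bytes(copy_a)

print(read_with_heal() == good)  # True: corruption detected and repaired
```

Without a redundant copy (single parity-less disk, no mirror), the checksum can still detect the corruption, but there is nothing to repair from, which is why independent copies or tape remain the safety net.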