tschw_ham
Service Provider
Posts: 3
Liked: never
Joined: Nov 08, 2017 2:06 pm
Contact:

Question about GFS Archive with ReFS

Post by tschw_ham » Feb 28, 2018 4:21 pm

Hello everybody,

I recently installed a new backup server for a customer. For long-term retention of backup data, I configured one repository on a ReFS volume and created a Backup Copy job with GFS settings to move data to that repository. Everything works as intended, and the full backups are created via Fast Clone. The customer asked what would happen if this runs for a while and there is data corruption in the initial backup that the following backups reference. Would the whole chain of dependent full backups be corrupted then?

I did not find anything related on the forum. Can somebody help me here?

Regards
Tobias

nmdange
Expert
Posts: 457
Liked: 107 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: Question about GFS Archive with ReFS

Post by nmdange » Feb 28, 2018 8:58 pm

Yes, if there is filesystem/hardware corruption in a block, it will affect all files that share that block. The same issue is present when using data deduplication, so it's not unique to ReFS fast clone.

If you run ReFS on Storage Spaces Direct, it can self-heal when it detects corruption, but this is similar to what a good hardware RAID controller or storage array can do. Either way, I see the risk of corruption as very low, but it is not zero, so that is up to the customer to decide. The alternative is having more space (or using tape backup) to store fully independent copies, but of course that means more $$$!
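To make the shared-block point concrete, here is a toy model (hypothetical names, not actual ReFS or Veeam internals, just an illustrative sketch): with fast clone, a synthetic full references blocks from the earlier full instead of copying them, so corrupting one shared block affects every full that references it.

```python
# Toy model of block cloning: "backup files" are just lists of block IDs
# referencing a shared block store. Names and structure are hypothetical.

blocks = {0: "data-A", 1: "data-B", 2: "data-C"}  # shared block store

full_1 = [0, 1, 2]          # initial full backup
full_2 = [0, 1, 2]          # fast-cloned full: same blocks, nothing copied
blocks[3] = "data-C'"       # later changed block
blocks[4] = "data-A'"       # later changed block
full_3 = [4, 1, 3]          # later full: no longer references block 0

def is_corrupt(backup):
    """A backup is unreadable if any block it references is corrupt."""
    return any(blocks[b] == "CORRUPT" for b in backup)

blocks[0] = "CORRUPT"       # simulate media corruption of one shared block

print([is_corrupt(f) for f in (full_1, full_2, full_3)])
# prints [True, True, False]
```

Every full that still references the corrupted block is affected, while a full whose blocks have since diverged is not; that is the trade-off between space savings and fully independent copies discussed above.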

tschw_ham
Service Provider
Posts: 3
Liked: never
Joined: Nov 08, 2017 2:06 pm
Contact:

Re: Question about GFS Archive with ReFS

Post by tschw_ham » Mar 05, 2018 8:05 am

Thanks for your answer.
