tschw_ham
Service Provider
Posts: 10
Liked: never
Joined: Nov 08, 2017 2:06 pm

Question about GFS Archive with ReFS

Post by tschw_ham »

Hello everybody,

I had a conversation with a customer for whom I recently installed a new backup server. For long-term retention of backup data, I configured one repository on a ReFS volume and created a Backup Copy job with GFS settings to move data to that repository. Everything works as intended, and the GFS full backups are created via Fast Clone. The customer asked: what happens if this runs for a while and there is data corruption in the initial backup that the following backups reference? Is the whole chain of dependent full backups corrupted then?

I did not find anything related on the forum. Can somebody help me here?

Regards
Tobias
nmdange
Veteran
Posts: 527
Liked: 142 times
Joined: Aug 20, 2015 9:30 pm

Re: Question about GFS Archive with ReFS

Post by nmdange »

Yes, if there is filesystem/hardware corruption in a block, it will affect all files that share that block. The same type of issue is present when using data deduplication, so it's not unique to ReFS fast clone.
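
To make the shared-block effect concrete, here is a minimal, purely illustrative Python sketch (not Veeam or ReFS code, and the block/full names are made up): block-cloned fulls are just references to the same underlying blocks, so corrupting one shared block affects every full that references it.

# Hypothetical model of block cloning, not an actual ReFS/Veeam implementation.
# A block store maps block IDs to their on-disk data.
blocks = {
    "b0": b"data-week-1",
    "b1": b"data-week-2",
    "b2": b"data-week-3",
}

# Each GFS full is a list of block references; later fulls reuse
# (fast-clone) blocks from the earlier full instead of copying them.
fulls = {
    "full_january": ["b0", "b1"],
    "full_february": ["b0", "b1", "b2"],  # reuses b0 and b1 via cloning
}

# Simulate silent corruption of one shared block on disk.
blocks["b0"] = b"\x00corrupted\x00"

# Every full that references the corrupted block is now affected.
for name, refs in fulls.items():
    if any(b"corrupted" in blocks[r] for r in refs):
        print(f"{name} is affected by the corrupted block")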

If you run ReFS on Storage Spaces Direct, it can self-heal when it detects corruption, which is similar to what a good hardware RAID controller or storage array can do. Either way, I see the risk of corruption as very low, but it is not zero, so that is up to the customer to decide. The alternative is having more space (or using tape backup) to store fully independent copies, but of course that means more $$$!
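
For the self-healing part, here is another hypothetical sketch of the general idea (again not ReFS or Storage Spaces code): each block carries a checksum, and when a read finds a copy whose checksum no longer matches, it is repaired from a healthy redundant copy.

# Conceptual sketch of checksum-based self-healing with mirrored copies.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

good = b"original block data"
mirror_a = {"data": good, "sum": checksum(good)}
mirror_b = {"data": good, "sum": checksum(good)}

# Silent corruption (bit rot) hits one mirror copy.
mirror_a["data"] = b"bit-rotted block data"

def read_with_repair(primary, secondary):
    # Verify the primary copy; if its checksum no longer matches,
    # repair it from the secondary copy (after verifying that one too).
    if checksum(primary["data"]) != primary["sum"]:
        if checksum(secondary["data"]) == secondary["sum"]:
            primary["data"] = secondary["data"]  # self-heal the bad copy
            return primary["data"]
        raise IOError("both copies are corrupt")
    return primary["data"]

print(read_with_repair(mirror_a, mirror_b))  # prints b'original block data'

Without that redundancy (a single non-mirrored repository), detection is still possible but repair is not, which is why the residual risk is low rather than zero.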
tschw_ham
Service Provider
Posts: 10
Liked: never
Joined: Nov 08, 2017 2:06 pm

Re: Question about GFS Archive with ReFS

Post by tschw_ham »

Thanks for your answer.