Does NFS storage not support deduplication in Backup and Replication job?
We are considering NFS or S3 for our repository. I have been testing with a small NFS mount and noticed that deduplication, although turned on for the job, does not seem to work. I have an NFS repository and a local disk repository. The "used space" for NFS matches the difference between Capacity and Free space, while the "used space" for the local disk repository is about 40% larger than that difference. I could not find anywhere that says NFS would not do deduplication. I figure S3 would be the same deal, as it is object-based storage rather than local or SAN-attached disk.
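A quick way to sanity-check the gap being described (a sketch with made-up GB figures, not your actual repository numbers): "capacity minus free" is what the filesystem really consumed, while a summed total of backup file sizes can come out larger when files share blocks on disk.

```shell
# Hypothetical numbers (GB) illustrating the ~40% gap described above.
# "capacity - free" is what the volume actually consumed; if backup
# files share blocks (fast clone), their summed sizes can be larger.
capacity=1000
free=720
on_disk=$((capacity - free))          # 280 GB really consumed
reported=400                          # summed backup file sizes
gap=$(( (reported - on_disk) * 100 / on_disk ))
echo "on-disk: ${on_disk} GB, reported: ${reported} GB, gap: ${gap}%"
# prints: on-disk: 280 GB, reported: 400 GB, gap: 42%
```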
- Enthusiast
- Posts: 67
- Liked: 11 times
- Joined: Feb 02, 2018 7:56 pm
- Full Name: Jason Mount
- Contact:
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: NFS and Deduplication
Hi Jason, how do you check those numbers? What kind of local repository is it - what is the file system, any chance it is using Windows deduplication? What are the related job settings (compression, storage optimization) and advanced repository settings in both cases? Are you backing up the same data to both?
- Enthusiast
- Posts: 67
- Liked: 11 times
- Joined: Feb 02, 2018 7:56 pm
- Full Name: Jason Mount
- Contact:
Re: NFS and Deduplication
I was just looking for validation that an NFS repository does not do any job compression/deduplication, as I am seeing.
I am looking at the repository in Veeam Console (even did a rescan just to make sure). That is where I am getting my numbers.
Both backup jobs - Storage compression level is set to optimal
Both backup jobs - Storage Optimizations is set to local target
The file type is NFS on one repository and REFS on the other
I clone the job to make sure backing up the same data to both repositories.
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
- Contact:
Re: NFS and Deduplication
Hey Jason,
Are you seeing "fast clone" in your ReFS job?
ReFS does block cloning, and I suspect that's what's causing the discrepancy here. Backups in a chain written to a block-clone-capable ReFS volume use allocate-on-write and reference previously written blocks where they already exist instead of writing new ones.
The dedup Veeam mentions is not a file-system dedup, it's an in-file dedup. What lands on the NFS repository is the backup file(s) after that block deduplication has already happened.
With ReFS, this can be taken a step further due to block cloning, but as far as I know this isn't possible with NFS. You'd need an XFS volume with reflinks enabled to do this with Linux, or I suppose some ZFS appliance backing the NFS share (can you do that? I've not messed with ZFS in years and wrote it off because the dedup was not great).
Almost certainly this is the difference you're seeing, and I think there may be a slight misunderstanding about what Veeam's inline dedup does.
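To make the XFS-reflink analogue concrete, here's a minimal sketch (assuming GNU coreutils; the file names are made up, and on a filesystem without reflink support `cp --reflink=auto` silently falls back to a plain copy):

```shell
# Demonstrate a reflink copy, the XFS analogue of ReFS fast clone.
# On XFS the volume must be formatted with: mkfs.xfs -m reflink=1 ...
tmp=$(mktemp -d)
head -c 1048576 /dev/urandom > "$tmp/full.vbk"     # stand-in backup file
cp --reflink=auto "$tmp/full.vbk" "$tmp/clone.vbk" # shares extents if supported
cmp -s "$tmp/full.vbk" "$tmp/clone.vbk" && echo "clone matches source"
# On reflink-capable storage, df barely moves after the copy: both files
# point at the same physical blocks until one of them is modified.
rm -rf "$tmp"
```

This is why the summed size of the backup files can exceed what the volume actually consumed, which matches the ~40% gap described above.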