Error: related to file system limitation?

Re: Error: related to file system limitation?

by tsightler » Thu Aug 14, 2014 5:24 pm

It's certainly possible to still hit the limit even with /L. Using the /L parameter tells the system to format the NTFS volume with large file records. The default size for a file record is 1K, and with this flag it is 4K, which means you're roughly increasing the number of fragments allowed for a large file by 4x. I could certainly still see hitting this limit easily when using Windows 2012 dedupe, especially with very large files. I'd be somewhat surprised if it were hit on a "normal" filesystem, though it's still not impossible with very large files on a very large filesystem.

I personally recommend using larger-than-default cluster sizes, as this helps avoid excessive fragmentation since the file will be laid out in larger chunks. I can find very little reason not to use 64K allocation units for Veeam repositories in pretty much all cases.
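
For reference, here is a minimal sketch of applying both recommendations when (re)formatting a repository volume in PowerShell, assuming the Storage module's Format-Volume cmdlet (Server 2012 R2 or later) and an example drive letter R:
Code: Select all
# 64K clusters plus large (4K) file record segments;
# -UseLargeFRS is the PowerShell equivalent of format's /L switch.
Format-Volume -DriveLetter R -FileSystem NTFS -AllocationUnitSize 65536 -UseLargeFRS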
tsightler
Veeam Software
 
Posts: 4843
Liked: 1787 times
Joined: Fri Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Error: related to file system limitation?

by StrangeWill » Fri Oct 03, 2014 7:51 pm

foggy wrote:Contacting support directly could be more effective than waiting for other users' feedback here. Have you already done that?


It seemed to be caused by setting the dedupe age to "0"; setting it to "1" fixed it.
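
For anyone else hitting this, the equivalent change in PowerShell would look something like this sketch (assuming E: is the dedupe-enabled repository volume):
Code: Select all
# Only deduplicate files older than 1 day, rather than immediately (0 days).
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 1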
StrangeWill
Influencer
 
Posts: 24
Liked: 1 time
Joined: Thu Aug 15, 2013 4:12 pm
Full Name: William Roush

Re: Error: related to file system limitation?

by TheJourney » Thu Jan 08, 2015 10:14 pm

So I'm going with:
Code: Select all
format J: /L /Q /FS:NTFS /A:64K

for my J: drive. Omit the /Q if you get paid by the hour :lol:
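
If you want to double-check the result afterwards, something like this should show both values (a sketch, running fsutil from PowerShell):
Code: Select all
# Confirm the 64K clusters and 4K file record segments took effect.
fsutil fsinfo ntfsinfo J: | Select-String "FileRecord|Cluster"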
TheJourney
Influencer
 
Posts: 13
Liked: 2 times
Joined: Thu Dec 10, 2009 8:44 pm
Full Name: Sam Journagan

Re: Error: related to file system limitation?

by Delo123 » Fri Jan 09, 2015 8:31 am

Sadly, we are currently testing Acronis, for the silly reason that Veeam cannot (does not want to) split backup files into smaller chunks.
I "hope" we will run into some issues with Acronis, since I do not want to lose our beloved Veeam :(
Delo123
Expert
 
Posts: 351
Liked: 101 times
Joined: Fri Dec 28, 2012 5:20 pm
Full Name: Guido Meijers

Re: Error: related to file system limitation?

by Vitaliy S. » Fri Jan 09, 2015 12:00 pm

Hi Guido,

Just want to make sure we are on the same page with this: it is not that we do not want to do it, but there are certain features that have/had higher priority and bring more value to existing and new Veeam users.

Can you please tell me why reconfiguring the backup job with a smaller number of VMs does not work in your case?

Thanks!
Vitaliy S.
Veeam Software
 
Posts: 19773
Liked: 1120 times
Joined: Mon Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Error: related to file system limitation?

by Delo123 » Fri Jan 09, 2015 9:34 pm

Hi Vitaliy,

We have quite a few big VMs (2-4TB), mainly big databases, servers with legacy applications, and applications with licensing issues where data cannot be distributed across multiple servers.
Also, we like to group at least some VMs together in backup jobs to get some dedupe on the primary repository and to speed them up...

And thanks for your comment; as I understood it from earlier threads, you/Veeam didn't see the benefit of smaller files (vs. a bit more admin overhead), my bad...

And thanks for picking this up!
Delo123
Expert
 
Posts: 351
Liked: 101 times
Joined: Fri Dec 28, 2012 5:20 pm
Full Name: Guido Meijers

Re: Error: related to file system limitation?

by b.vanhaastrecht » Tue Jun 02, 2015 6:32 am 1 person likes this post

Hi,

We are seeing this error months after formatting the volume with /L; the FileRecord segment is 4096:
Code: Select all
NTFS Volume Serial Number :       0xa41aee3e1aee0cdc
NTFS Version   :                  3.1
LFS Version    :                  2.0
Number Sectors :                  0x000000105fb96fff
Total Clusters :                  0x0000000020bf72df
Free Clusters  :                  0x000000000446c286
Total Reserved :                  0x0000000000000030
Bytes Per Sector  :               512
Bytes Per Physical Sector :       512
Bytes Per Cluster :               65536
Bytes Per FileRecord Segment    : 4096     <==== /L did its job, setting it from 1024 to 4096
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x000000000db00000
Mft Start Lcn  :                  0x000000000000c000
Mft2 Start Lcn :                  0x0000000000000001
Mft Zone Start :                  0x0000000003ceed80
Mft Zone End   :                  0x0000000003cefa20
Resource Manager Identifier :     E2FE0971-C413-11E4-80C5-9CB6548CAF1D
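
For anyone wanting to check their own volume, the output above comes from fsutil (F: assumed here, as that is the volume in the event log error below):
Code: Select all
fsutil fsinfo ntfsinfo F: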


And we get NTFS errors in the system event log:
Code: Select all
{Delayed Write Failed} Windows was unable to save all the data for the file F:\some.vbk; the data has been lost. This error may be caused if the device has been removed or the media is write-protected.


So this means the file has accumulated more fragments in the dedupe than the 4096-byte file record can track, and writes to it are no longer allowed. It's a shame MS has not set a limit on dedupe to avoid this.

So be warned: increasing the file record size does not mean you won't end up with a bogus file. I think this warning should be added to http://www.veeam.com/kb1893 .
========================================
Veeam ProPartner and Cloud Connect Provider
b.vanhaastrecht
Service Provider
 
Posts: 338
Liked: 67 times
Joined: Mon Aug 26, 2013 7:46 am
Location: The Netherlands
Full Name: Bastiaan van Haastrecht

Re: Error: related to file system limitation?

by foggy » Wed Jun 03, 2015 11:00 am

KB updated, thanks for the heads up!
foggy
Veeam Software
 
Posts: 15087
Liked: 1110 times
Joined: Mon Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Error: related to file system limitation?

by tsightler » Wed Jun 03, 2015 9:40 pm

b.vanhaastrecht wrote:So this means the file has accumulated more fragments in the dedupe than the 4096-byte file record can track, and writes to it are no longer allowed. It's a shame MS has not set a limit on dedupe to avoid this.

I mentioned in a post above that it would still be possible to hit this limit and that increasing the file record size only makes it less likely; however, you are the first person I've seen actually hit the limit, so I appreciate you sharing that. I'm wondering if you could provide any detail about the size of your backup file?
tsightler
Veeam Software
 
Posts: 4843
Liked: 1787 times
Joined: Fri Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Error: related to file system limitation?

by b.vanhaastrecht » Mon Jun 08, 2015 11:49 am

Well, it was a slight surprise to us. We had dedupe running on a repository where both forward and reversed backup files are stored. We had excluded the reversed incremental folder, but somehow this exclusion didn't get applied to this particular file. We think the reversed file was already covered by the dedupe policy and we didn't notice it.

So, it was a reversed incremental file of 1.5TB. After about 3 months of daily backups, the 4096-byte file record wasn't enough anymore.
========================================
Veeam ProPartner and Cloud Connect Provider
b.vanhaastrecht
Service Provider
 
Posts: 338
Liked: 67 times
Joined: Mon Aug 26, 2013 7:46 am
Location: The Netherlands
Full Name: Bastiaan van Haastrecht

Re: Error: related to file system limitation?

by hans_lenze » Mon Jun 08, 2015 6:42 pm

Did you change the priority optimization to allow immediate defragmentation of large files as per https://technet.microsoft.com/en-us/lib ... 91438.aspx?

Tune performance for large scale operations—Run the following PowerShell script to:

Disable additional processing and I/O when deep garbage collection runs

Reserve additional memory for hash processing

Enable priority optimization to allow immediate defragmentation of large files

Code: Select all
Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name HashIndexFullKeyReservationPercent -Value 70
Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name EnablePriorityOptimization -Value 1

These settings modify the following:

HashIndexFullKeyReservationPercent: This value controls how much of the optimization job memory is used for existing chunk hashes, versus new chunk hashes. At high scale, 70% results in better optimization throughput than the 50% default.

EnablePriorityOptimization: With files approaching 1TB, fragmentation of a single file can accumulate enough fragments to approach the per file limit. Optimization processing consolidates these fragments and prevents this limit from being reached. By setting this registry key, dedup will add an additional process to deal with highly fragmented deduped files with high priority.
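
To check what these values are currently set to, something like this should work (a sketch, using the clustered registry path from the quoted article; the location may differ on a standalone server):
Code: Select all
# Read back the dedupe tuning values (clustered path from the article).
Get-ItemProperty -Path HKLM:\Cluster\Dedup |
    Select-Object HashIndexFullKeyReservationPercent, EnablePriorityOptimization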
hans_lenze
Service Provider
 
Posts: 16
Liked: 4 times
Joined: Fri Sep 07, 2012 7:07 am

Re: Error: related to file system limitation?

by b.vanhaastrecht » Fri Jun 12, 2015 2:23 pm

No, I wasn't aware of this option. It looks like it does not prevent the problem, but it will optimize these large files with higher priority, so the chance of hitting the index limit is lower, though still possible.
========================================
Veeam ProPartner and Cloud Connect Provider
b.vanhaastrecht
Service Provider
 
Posts: 338
Liked: 67 times
Joined: Mon Aug 26, 2013 7:46 am
Location: The Netherlands
Full Name: Bastiaan van Haastrecht

Re: Error: related to file system limitation?

by Battlestorm » Tue Jun 16, 2015 3:50 pm

The article only seems to give the reg keys for a cluster setup; searching for HashIndexFullKeyReservationPercent or EnablePriorityOptimization only finds articles with the same keys for a cluster environment.
The article also talks about another key, DeepGCInterval, which shows up in some Google searches as also living under HKLM\System\CurrentControlSet\Services\ddpsvc\Settings. My guess is that the above 2 keys can also be placed there; does anyone have any experience? A sketch of that guess is below.
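
Purely as an untested sketch of that guess (it is an assumption, not a confirmed behavior, that the dedupe service reads these values from the ddpsvc path on a standalone server):
Code: Select all
# UNTESTED guess: create the two values under the non-clustered ddpsvc settings key.
$path = 'HKLM:\SYSTEM\CurrentControlSet\Services\ddpsvc\Settings'
New-ItemProperty -Path $path -Name HashIndexFullKeyReservationPercent -Value 70 -PropertyType DWord -Force
New-ItemProperty -Path $path -Name EnablePriorityOptimization -Value 1 -PropertyType DWord -Force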
Battlestorm
Veeam ProPartner
 
Posts: 20
Liked: 2 times
Joined: Fri Feb 21, 2014 10:49 am
Location: London, UK
Full Name: Daniel Ely

Re: Error: related to file system limitation?

by rnt-guy » Tue Apr 26, 2016 1:46 pm

b.vanhaastrecht wrote:So, it was a reversed incremental file of 1.5TB. After about 3 months of daily backups, the 4096-byte file record wasn't enough anymore.


I'm confused: should I be setting the cluster size to something larger? To what? Our largest file is 2TB (a database image file). Should I use 8192 to be safe and just eat the disk space savings it loses?

Also wondering if there's any reason I can't just create another volume formatted correctly, copy the files there, and then point Veeam to that location? Or will that constitute a reseed process instead of moving it back?
rnt-guy
Service Provider
 
Posts: 61
Liked: 1 time
Joined: Thu Feb 04, 2016 12:58 pm
Full Name: RNT-Guy

Re: Error: related to file system limitation?

by foggy » Thu Apr 28, 2016 4:25 pm

Another option is to periodically compact the backup file to defragment it.
foggy
Veeam Software
 
Posts: 15087
Liked: 1110 times
Joined: Mon Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
