Comprehensive data protection for all workloads
Battlestorm
Veeam ProPartner
Posts: 21
Liked: 2 times
Joined: Feb 21, 2014 10:49 am
Full Name: Daniel Ely
Location: London, UK
Contact:

Error: related to file system limitation?

Post by Battlestorm »

Morning Guys,
We have just enabled deduplication on a Windows 2012 R2 repository, and we are now getting the following error for one of our backups.

15/04/2014 08:19:01 :: Error: Client error: The requested operation could not be completed due to a file system limitation
Failed to flush file buffers. File: [D:\VeeamBackups\Temp Backup Job\Temp Backup Job2014-03-29T220137.vbk].

So it's a fairly safe assumption that it's from enabling dedupe; the file in question is 4 TB, way over the recommended 1 TB file limit for Windows dedupe.

So, the questions:

1) Has anybody else had this error with large VBKs, and what is the largest you have on a deduped repository?
2) What is the best way to mitigate this issue? I'm thinking of splitting the job into smaller chunks. Is there a registry key to allow Veeam to split the VBK into smaller chunks?

regards,
Daniel
foggy
Veeam Software
Posts: 21133
Liked: 2139 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Error: related to file system limitation?

Post by foggy »

Daniel, there's no such registry key; you can only split the job manually, by distributing the VMs across several jobs. I also suggest opening a support case for investigation. Thanks.
henrym
Lurker
Posts: 1
Liked: never
Joined: Apr 28, 2014 7:20 am
Location: Germany
Contact:

Re: Error: related to file system limitation?

Post by henrym »

Hello

we have the same problem. All jobs work perfectly except two that have backup files of about 1.6 TB. They used to work, but stopped working a few days ago.

Is there already a solution for this problem?

Regards
Henry
Vitaliy S.
VP, Product Management
Posts: 27340
Liked: 2782 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Error: related to file system limitation?

Post by Vitaliy S. »

Cannot check on the solution, since the OP hasn't mentioned his support case ID...
JZTOR
Service Provider
Posts: 14
Liked: never
Joined: Nov 29, 2010 2:38 pm
Full Name: Joseph Zinguer
Contact:

Re: Error: related to file system limitation?

Post by JZTOR »

Just got the same error with Veeam 7 (patch 3). It was working fine for the last few months. 10 VMs, total used space 2 TB, target is Windows 2012 R2 with deduplication enabled. The other jobs on the same server to the same target are not affected.
veremin
Product Manager
Posts: 20354
Liked: 2286 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Error: related to file system limitation?

Post by veremin »

As mentioned in the recent community digest, this issue might be related to the heavily fragmented backup file:
Gostev wrote:This issue with Windows-based backup repositories comes up in our support and on the forums quite often. Backups start failing with "The requested operation could not be completed due to a file system limitation" or "Insufficient system resources exist to complete the requested service". This NTFS issue hits large, fragmented backup files: a heavily fragmented file in an NTFS volume may not grow beyond a certain size. Note that you need to format the volume after installing the hotfix with Format <Drive:> /FS:NTFS /L. The issue is addressed in Windows Server 2012 and Windows 8; however, formatting the volume may still be necessary.
If you feel that isn't the case, please open a ticket with our support team and let them confirm your environment.

Thanks.
namiko78
Expert
Posts: 117
Liked: 4 times
Joined: Mar 03, 2011 1:49 pm
Full Name: Steven Stirling
Contact:

Re: Error: related to file system limitation?

Post by namiko78 »

I'm having similar issues, but running 2012 R2. It appears I don't need the patch, but I still need to format with /L for large file records?

Will defragging help at this point or is the backup file corrupted?
foggy
Veeam Software
Posts: 21133
Liked: 2139 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Error: related to file system limitation?

Post by foggy »

Hard to say without reviewing the log files. You can try defragmentation and contact support in case it does not help (or just contact them immediately to confirm the issue first).
namiko78
Expert
Posts: 117
Liked: 4 times
Joined: Mar 03, 2011 1:49 pm
Full Name: Steven Stirling
Contact:

Re: Error: related to file system limitation?

Post by namiko78 »

Just waiting for them to get back to me.
Have included the logs also, thanks
foggy
Veeam Software
Posts: 21133
Liked: 2139 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Error: related to file system limitation?

Post by foggy »

Your support case ID posted here would help us in tracking this issue for future readers... Thanks!
namiko78
Expert
Posts: 117
Liked: 4 times
Joined: Mar 03, 2011 1:49 pm
Full Name: Steven Stirling
Contact:

Re: Error: related to file system limitation?

Post by namiko78 » 1 person likes this post

Case # 00587092
hans_lenze
Service Provider
Posts: 17
Liked: 4 times
Joined: Sep 07, 2012 7:07 am
Contact:

Re: Error: related to file system limitation?

Post by hans_lenze »

Ran into this issue too.
http://www.veeam.com/kb1893
Support case #00589516

I've stopped all the Veeam services and I'm copying all the backup files off the repository partition.
When everything's copied I'll reformat the drive with the /L flag and copy all the files back.

We'll see how it goes.
hans_lenze
Service Provider
Posts: 17
Liked: 4 times
Joined: Sep 07, 2012 7:07 am
Contact:

Re: Error: related to file system limitation?

Post by hans_lenze » 1 person likes this post

It works, just don't forget to select the right NTFS cluster size when formatting the volume using the command line. If you do forget, you'll have to reformat again...

In my case (18TB repository) the correct command was: format D: /L /Q /FS:NTFS /A:8192

See the following link for the correct cluster size for a given volume size: http://support.microsoft.com/kb/140365/en
StrangeWill
Influencer
Posts: 24
Liked: 1 time
Joined: Aug 15, 2013 4:12 pm
Full Name: William Roush
Contact:

Re: Error: related to file system limitation?

Post by StrangeWill »

Has anyone experienced a case where formatting with "/L" DOESN'T fix it?
foggy
Veeam Software
Posts: 21133
Liked: 2139 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Error: related to file system limitation?

Post by foggy »

Contacting support directly could be more effective than waiting for other users' feedback here. Have you already done that?
tsightler
VP, Product Management
Posts: 6029
Liked: 2856 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Error: related to file system limitation?

Post by tsightler »

It's certainly possible to still hit the limit even with /L. The /L parameter tells the system to format the NTFS volume with large file records. The default size for a file record is 1K, and with this flag they are 4K, which means you're roughly increasing the number of fragments allowed for a large file by 4x. I could certainly still see hitting this limit easily when using Windows 2012 dedupe, especially with very large files. I'd be somewhat surprised if it were hit on a "normal" file system, though it's still not impossible with very large files on a very large file system.

I personally recommend using larger cluster sizes (larger than the default), as this can help avoid excessive fragmentation, since the file will be laid out in larger chunks. I can find very little reason not to use 64K allocation units for Veeam repositories in pretty much all cases.
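Tom's scaling argument can be sketched with some back-of-the-envelope arithmetic. This is purely illustrative: it assumes the worst case of one extent per cluster, and that extent headroom scales linearly with the file record segment size; neither is an exact NTFS figure.

```python
# Back-of-the-envelope: why larger file records and larger clusters help.
# Illustrative assumptions only (not exact NTFS internals):
#  - a fully fragmented file needs one extent per cluster;
#  - the number of extents a file record can track scales linearly
#    with the file record segment size.

def worst_case_extents(file_bytes: int, cluster_bytes: int) -> int:
    """Extents needed if every cluster of the file is a separate fragment."""
    return file_bytes // cluster_bytes

TB = 1024 ** 4

# A 4 TB backup file with 4K vs. 64K clusters:
print(worst_case_extents(4 * TB, 4 * 1024))   # 1073741824 extents
print(worst_case_extents(4 * TB, 64 * 1024))  # 67108864 extents (16x fewer)

# /L raises the file record segment from 1K to 4K, roughly 4x the headroom:
print(4096 // 1024)  # 4
```

Combining both (64K clusters plus /L) multiplies the margin, which matches the advice in this thread.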
StrangeWill
Influencer
Posts: 24
Liked: 1 time
Joined: Aug 15, 2013 4:12 pm
Full Name: William Roush
Contact:

Re: Error: related to file system limitation?

Post by StrangeWill »

foggy wrote:Contacting support directly could be more effective than waiting for other users feedback here. Have you already done that?
It seemed to be caused by setting the dedupe age to "0"; setting it to "1" fixed it.
TheJourney
Influencer
Posts: 22
Liked: 4 times
Joined: Dec 10, 2009 8:44 pm
Full Name: Sam Journagan
Contact:

Re: Error: related to file system limitation?

Post by TheJourney »

So I'm going with: format J: /L /Q /FS:NTFS /A:64K

for my J: drive. Omit the /Q if you get paid by the hour :lol:
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Error: related to file system limitation?

Post by Delo123 »

Sadly, we are currently testing Acronis for the silly reason that Veeam cannot (or does not want to) split backup files into smaller chunks.
I "hope" we will run into some issues with Acronis, since I do not want to lose our beloved Veeam :(
Vitaliy S.
VP, Product Management
Posts: 27340
Liked: 2782 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Error: related to file system limitation?

Post by Vitaliy S. »

Hi Guido,

Just want to make sure we are on the same page with this: it is not something that we do not want to do, but there are certain features that have/had higher priority, bringing more value to existing and new Veeam users.

Can you please tell me why re-configuring the backup job with a smaller number of VMs does not work in your case?

Thanks!
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Error: related to file system limitation?

Post by Delo123 »

Hi Vitaliy,

We have quite a few big VMs (2-4 TB): mainly big databases, servers with legacy applications, and applications with licensing restrictions where data cannot be distributed across multiple servers.
We also like to group at least some VMs together in backup jobs to get some dedupe on the primary repository and to speed them up...

And thanks for your comment; as I understood it from earlier threads, you/Veeam didn't see the benefit of smaller files (vs. a bit more admin overhead), my bad...

And thanks for picking this up!
b.vanhaastrecht
Service Provider
Posts: 865
Liked: 160 times
Joined: Aug 26, 2013 7:46 am
Full Name: Bastiaan van Haastrecht
Location: The Netherlands
Contact:

Re: Error: related to file system limitation?

Post by b.vanhaastrecht » 1 person likes this post

Hi,

We are seeing this error months after formatting the volume with /L; the FileRecord segment is 4096:


NTFS Volume Serial Number :       0xa41aee3e1aee0cdc
NTFS Version   :                  3.1
LFS Version    :                  2.0
Number Sectors :                  0x000000105fb96fff
Total Clusters :                  0x0000000020bf72df
Free Clusters  :                  0x000000000446c286
Total Reserved :                  0x0000000000000030
Bytes Per Sector  :               512
Bytes Per Physical Sector :       512
Bytes Per Cluster :               65536
Bytes Per FileRecord Segment    : 4096     <==== /L did its job, setting it from 1024 to 4096
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x000000000db00000
Mft Start Lcn  :                  0x000000000000c000
Mft2 Start Lcn :                  0x0000000000000001
Mft Zone Start :                  0x0000000003ceed80
Mft Zone End   :                  0x0000000003cefa20
Resource Manager Identifier :     E2FE0971-C413-11E4-80C5-9CB6548CAF1D
And we get NTFS errors in the system event log:


{Delayed Write Failed} Windows was unable to save all the data for the file F:\some.vbk; the data has been lost. This error may be caused if the device has been removed or the media is write-protected.
So this means the deduped file became more fragmented than even the 4096-byte file records can track, and it can't accept writes anymore. It's a shame MS has not set a limit on deduplication to avoid this.

So be warned: increasing the index does not mean you won't end up with a bogus file. I think this warning should be added to http://www.veeam.com/kb1893 .
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
foggy
Veeam Software
Posts: 21133
Liked: 2139 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Error: related to file system limitation?

Post by foggy »

KB updated, thanks for the heads up!
tsightler
VP, Product Management
Posts: 6029
Liked: 2856 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Error: related to file system limitation?

Post by tsightler »

b.vanhaastrecht wrote:So this means the deduped file became more fragmented than even the 4096-byte file records can track, and it can't accept writes anymore. It's a shame MS has not set a limit on deduplication to avoid this.
I mentioned in a post above that it would still be possible to hit this limit and that increasing it only makes that less likely; however, you are the first person I've seen actually hit the limit, so I appreciate you sharing that. I'm wondering if you could provide any detail about the size of your backup file?
b.vanhaastrecht
Service Provider
Posts: 865
Liked: 160 times
Joined: Aug 26, 2013 7:46 am
Full Name: Bastiaan van Haastrecht
Location: The Netherlands
Contact:

Re: Error: related to file system limitation?

Post by b.vanhaastrecht »

Well, it was a slight surprise to us. We had dedupe running on a repository where both forward and reversed backup files are stored. We had excluded the reversed incremental folder, but somehow this exclusion didn't get applied to this particular file. We think the reversed file was already in the dedupe policy and we didn't notice it.

So, it was a reversed incremental file of 1.5 TB. After about 3 months of daily backups, the 4096-byte file record index wasn't enough anymore.
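For a sense of scale, here is a rough sketch of why a deduped file fragments so fast. The ~64 KB average chunk size and the one-extent-per-chunk worst case are assumptions for illustration, not confirmed Windows dedupe internals:

```python
# Rough scale of dedupe-induced fragmentation for a 1.5 TB file.
# Assumptions (illustrative only): average dedupe chunk ~64 KB, and in
# the worst case every chunk becomes a separate extent on disk.
TB = 1024 ** 4
CHUNK = 64 * 1024  # assumed average chunk size

file_bytes = 3 * TB // 2  # 1.5 TB
worst_case_chunks = file_bytes // CHUNK
print(worst_case_chunks)  # 25165824 potential fragments
```

Tens of millions of potential fragments against a file record sized for thousands of extents makes it plausible that even the 4x headroom from /L eventually runs out.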
hans_lenze
Service Provider
Posts: 17
Liked: 4 times
Joined: Sep 07, 2012 7:07 am
Contact:

Re: Error: related to file system limitation?

Post by hans_lenze »

Did you change the priority optimization to allow immediate defragmentation of large files as per https://technet.microsoft.com/en-us/lib ... 91438.aspx?
Tune performance for large scale operations—Run the following PowerShell script to:

Disable additional processing and I/O when deep garbage collection runs

Reserve additional memory for hash processing

Enable priority optimization to allow immediate defragmentation of large files

Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name HashIndexFullKeyReservationPercent -Value 70
Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name EnablePriorityOptimization -Value 1

These settings modify the following:

HashIndexFullKeyReservationPercent: This value controls how much of the optimization job memory is used for existing chunk hashes, versus new chunk hashes. At high scale, 70% results in better optimization throughput than the 50% default.

EnablePriorityOptimization: With files approaching 1TB, fragmentation of a single file can accumulate enough fragments to approach the per file limit. Optimization processing consolidates these fragments and prevents this limit from being reached. By setting this registry key, dedup will add an additional process to deal with highly fragmented deduped files with high priority.
b.vanhaastrecht
Service Provider
Posts: 865
Liked: 160 times
Joined: Aug 26, 2013 7:46 am
Full Name: Bastiaan van Haastrecht
Location: The Netherlands
Contact:

Re: Error: related to file system limitation?

Post by b.vanhaastrecht »

No, I wasn't aware of this option. It looks like this does not prevent the issue, but it will optimize these large files with higher priority, so the chance of hitting the index limit is lower, though still possible.
Battlestorm
Veeam ProPartner
Posts: 21
Liked: 2 times
Joined: Feb 21, 2014 10:49 am
Full Name: Daniel Ely
Location: London, UK
Contact:

Re: Error: related to file system limitation?

Post by Battlestorm »

The article only seems to give the registry keys for a cluster setup; searching for HashIndexFullKeyReservationPercent or EnablePriorityOptimization only finds articles with the same keys for a cluster environment.
The article also talks about another key, DeepGCInterval, which does show up in some Google searches as also living in HKLM\System\CurrentControlSet\Services\ddpsvc\Settings. My guess is that the above two keys can also be placed there. Does anyone have any experience?
rnt-guy
Enthusiast
Posts: 61
Liked: 1 time
Joined: Feb 04, 2016 12:58 pm
Full Name: RNT-Guy
Contact:

Re: Error: related to file system limitation?

Post by rnt-guy »

b.vanhaastrecht wrote:So, it was a reversed incremental file of 1.5TB. After about 3 months daily backup the index of 4096 wasn't enough anymore.
I'm confused. Should I be setting the cluster size to something larger? To what? Our largest file is 2 TB (a database image file). Should I use 8192 to be safe and just eat the disk space savings it loses?

Also wondering if there's any reason I can't just create another volume, formatted correctly, copy the files there, and then point Veeam to that location? Would that constitute a reseed process instead of moving them back?
foggy
Veeam Software
Posts: 21133
Liked: 2139 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Error: related to file system limitation?

Post by foggy »

Another option is to periodically compact the backup file to defragment it.