-
- Veeam ProPartner
- Posts: 21
- Liked: 2 times
- Joined: Feb 21, 2014 10:49 am
- Full Name: Daniel Ely
- Location: London, UK
- Contact:
Error: related to file system limitation?
Morning Guys,
We have just enabled deduplication on a Windows 2012 R2 repository, and we are now getting the following error for one of our backups.
15/04/2014 08:19:01 :: Error: Client error: The requested operation could not be completed due to a file system limitation
Failed to flush file buffers. File: [D:\VeeamBackups\Temp Backup Job\Temp Backup Job2014-03-29T220137.vbk].
So it's a fairly safe assumption that it's from enabling dedupe; the file in question is 4 TB, way over the recommended 1 TB file limit for Windows dedupe.
So the questions:
1) Has anybody else had this error with large VBKs, and what is the largest you have on a deduped repository?
2) What is the best way to mitigate this issue? I'm thinking I just need to separate the job into smaller chunks. Is there a registry key to allow Veeam to split the VBK into smaller chunks?
regards,
Daniel
-
- Veeam Software
- Posts: 21133
- Liked: 2139 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Error: related to file system limitation?
Daniel, there's no such registry key; you can only split the job manually, by adding fewer VMs into several jobs. I also suggest opening a support case for investigation. Thanks.
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Apr 28, 2014 7:20 am
- Location: Germany
- Contact:
Re: Error: related to file system limitation?
Hello
we have the same problem. All jobs work perfectly except two that have backup files of about 1.6 TB. They used to work but stopped working a few days ago.
Is there already a solution for this problem?
Regards
Henry
-
- VP, Product Management
- Posts: 27340
- Liked: 2782 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Error: related to file system limitation?
Cannot check for the solution, since the OP hasn't mentioned his support case ID...
-
- Service Provider
- Posts: 14
- Liked: never
- Joined: Nov 29, 2010 2:38 pm
- Full Name: Joseph Zinguer
- Contact:
Re: Error: related to file system limitation?
Just got the same error with Veeam 7 (Patch 3). It was working fine for the last few months. 10 VMs, total used space 2 TB, target: Windows 2012 R2 with deduplication enabled. The other jobs on the same server to the same target are not affected.
-
- Product Manager
- Posts: 20354
- Liked: 2286 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Error: related to file system limitation?
As mentioned in the recent community digest, this issue might be related to a heavily fragmented backup file:
Gostev wrote: This issue with Windows-based backup repositories comes up in our support and on the forums quite often. Backups start failing with "The requested operation could not be completed due to a file system limitation" or "Insufficient system resources exist to complete the requested service". This NTFS issue hits large, fragmented backup files > A heavily fragmented file in an NTFS volume may not grow beyond a certain size. Note that you need to format the volume after installing the hotfix with Format <Drive:> /FS:NTFS /L. The issue is addressed in Windows Server 2012 and Windows 8; however, formatting the volume may still be necessary.
If you feel that isn't the case, please open a ticket with our support team and let them confirm your environment.
Thanks.
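For anyone checking whether a repository volume was actually formatted with /L, the NTFS metadata can be read back with fsutil (a minimal sketch; D: is an example drive letter). A volume with large file records reports "Bytes Per FileRecord Segment : 4096" rather than the default 1024:
Code: Select all
fsutil fsinfo ntfsinfo D: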
-
- Expert
- Posts: 117
- Liked: 4 times
- Joined: Mar 03, 2011 1:49 pm
- Full Name: Steven Stirling
- Contact:
Re: Error: related to file system limitation?
I'm having similar issues, but running 2012 R2. It appears I don't need the patch, but I still need to format with /L for large file records?
Will defragging help at this point or is the backup file corrupted?
-
- Veeam Software
- Posts: 21133
- Liked: 2139 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Error: related to file system limitation?
Hard to say without reviewing the log files. You can try defragmentation and contact support in case it does not help (or just contact them immediately to confirm the issue first).
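For reference, a manual defragmentation pass with the built-in tool looks like this (a sketch; D: is an example drive, /U prints progress and /V prints verbose statistics):
Code: Select all
defrag D: /U /V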
-
- Expert
- Posts: 117
- Liked: 4 times
- Joined: Mar 03, 2011 1:49 pm
- Full Name: Steven Stirling
- Contact:
Re: Error: related to file system limitation?
Just waiting for them to get back to me.
Have included the logs also, thanks
-
- Veeam Software
- Posts: 21133
- Liked: 2139 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Error: related to file system limitation?
Your support case ID posted here would help us in tracking this issue for future readers... Thanks!
-
- Expert
- Posts: 117
- Liked: 4 times
- Joined: Mar 03, 2011 1:49 pm
- Full Name: Steven Stirling
- Contact:
Re: Error: related to file system limitation?
Case # 00587092
-
- Service Provider
- Posts: 17
- Liked: 4 times
- Joined: Sep 07, 2012 7:07 am
- Contact:
Re: Error: related to file system limitation?
Ran into this issue too.
http://www.veeam.com/kb1893
Support case #00589516
I've stopped all the Veeam services and I'm copying all the backup files off the repository partition.
When everything's copied I'll reformat the drive with the /L flag and copy all the files back.
We'll see how it goes.
-
- Service Provider
- Posts: 17
- Liked: 4 times
- Joined: Sep 07, 2012 7:07 am
- Contact:
Re: Error: related to file system limitation?
It works; just don't forget to select the right NTFS cluster size when formatting the volume from the command line. If you do forget, you'll have to reformat again...
In my case (18 TB repository) the correct command was: format D: /L /Q /FS:NTFS /A:8192
See the following link for the correct cluster size for a given volume size: http://support.microsoft.com/kb/140365/en
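Pulled together, the move-and-reformat procedure described above looks roughly like this (a sketch only: the drive letters and staging path are illustrative, the 8192-byte cluster size comes from this 18 TB case, and format will prompt for a label and confirmation):
Code: Select all
Get-Service Veeam* | Stop-Service        # stop all Veeam services first
robocopy D:\ E:\RepoStaging /E           # copy the backup files off the repository
format D: /L /Q /FS:NTFS /A:8192         # reformat with large file records
robocopy E:\RepoStaging D:\ /E           # copy the files back
Get-Service Veeam* | Start-Service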
-
- Influencer
- Posts: 24
- Liked: 1 time
- Joined: Aug 15, 2013 4:12 pm
- Full Name: William Roush
- Contact:
Re: Error: related to file system limitation?
Anyone experience where formatting with "/L" DOESN'T fix it?
-
- Veeam Software
- Posts: 21133
- Liked: 2139 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Error: related to file system limitation?
Contacting support directly could be more effective than waiting for other users' feedback here. Have you already done that?
-
- VP, Product Management
- Posts: 6029
- Liked: 2856 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Error: related to file system limitation?
It's certainly possible to still hit the limit even with /L. Using the /L parameter tells the system to format the NTFS volume with large file records. The default size for a file record is 1K; with this flag it is 4K, which means you're roughly quadrupling the number of fragments allowed for a large file. I could certainly still see this limit being hit when using Windows 2012 dedupe, especially with very large files. I'd be somewhat surprised if it were hit on a "normal" file system, though it's still not impossible with very large files on a very large file system.
I personally recommend using larger cluster sizes (larger than the default), as this can help avoid excessive fragmentation since the file will be laid out in larger chunks. I can find very little reason not to use 64K allocation units for Veeam repositories in pretty much all cases.
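To verify what allocation unit size an existing volume uses before committing to a reformat, the storage cmdlets report it directly (a sketch; the drive letter is an example, and a 64K cluster shows up as 65536):
Code: Select all
Get-Volume -DriveLetter D | Select-Object FileSystemLabel, AllocationUnitSize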
-
- Influencer
- Posts: 24
- Liked: 1 time
- Joined: Aug 15, 2013 4:12 pm
- Full Name: William Roush
- Contact:
Re: Error: related to file system limitation?
foggy wrote: Contacting support directly could be more effective than waiting for other users' feedback here. Have you already done that?
It seemed to be caused by setting the dedupe age to "0"; setting it to "1" fixed it.
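Assuming the "dedupe age" William refers to is the volume's minimum file age setting, the Windows Deduplication module exposes it as follows (a sketch, not a confirmed mapping; D: is an example drive):
Code: Select all
# 0 dedupes files immediately; 1 leaves files alone until they are a day old
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 1
Get-DedupVolume -Volume "D:" | Select-Object Volume, MinimumFileAgeDays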
-
- Influencer
- Posts: 22
- Liked: 4 times
- Joined: Dec 10, 2009 8:44 pm
- Full Name: Sam Journagan
- Contact:
Re: Error: related to file system limitation?
So I'm going with: format J: /L /Q /FS:NTFS /A:64K
for my J: drive. Omit the /Q if you get paid by the hour.
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: Error: related to file system limitation?
Sadly, we are currently testing Acronis, for the silly reason that Veeam cannot (or does not want to) split backup files into smaller chunks.
I "hope" we will run into some issues with Acronis, since I do not want to lose our beloved Veeam.
I "hope" we will run into some issues with Acronis since i do not want to lose our beloved Veeam
-
- VP, Product Management
- Posts: 27340
- Liked: 2782 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Error: related to file system limitation?
Hi Guido,
Just want to make sure we are on the same page with this: it is not something that we do not want to do, but there are certain features that have/had higher priority, bringing more value to existing and new Veeam users.
Can you please tell me why reconfiguring the backup job with a smaller number of VMs does not work in your case?
Thanks!
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: Error: related to file system limitation?
Hi Vitaliy,
We have quite a few big VMs (2-4 TB), which are mainly big databases, servers with legacy applications, and applications with licensing issues where data cannot be distributed across multiple servers.
Also, we like to group at least some VMs together in backup jobs to get some dedupe on the primary repository and also to speed them up...
And thanks for your comment; as I understood it from earlier threads, you/Veeam didn't see the benefit of smaller files (vs. a bit more admin overhead), my bad...
And thx for picking this up!
-
- Service Provider
- Posts: 865
- Liked: 160 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
Re: Error: related to file system limitation?
Hi,
We are seeing this error months after formatting the volume with /L; the FileRecord segment is 4096:
Code: Select all
NTFS Volume Serial Number : 0xa41aee3e1aee0cdc
NTFS Version : 3.1
LFS Version : 2.0
Number Sectors : 0x000000105fb96fff
Total Clusters : 0x0000000020bf72df
Free Clusters : 0x000000000446c286
Total Reserved : 0x0000000000000030
Bytes Per Sector : 512
Bytes Per Physical Sector : 512
Bytes Per Cluster : 65536
Bytes Per FileRecord Segment : 4096 <==== /L did its job, setting it from 1024 to 4096
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x000000000db00000
Mft Start Lcn : 0x000000000000c000
Mft2 Start Lcn : 0x0000000000000001
Mft Zone Start : 0x0000000003ceed80
Mft Zone End : 0x0000000003cefa20
Resource Manager Identifier : E2FE0971-C413-11E4-80C5-9CB6548CAF1D
And we get NTFS errors in the system event log:
Code: Select all
{Delayed Write Failed} Windows was unable to save all the data for the file F:\some.vbk; the data has been lost. This error may be caused if the device has been removed or the media is write-protected.
So this means the file has become more fragmented under dedupe than the 4096-byte file record can track, and now can't accept writes anymore. It's a shame MS has not set a limit on dedupe to avoid this.
So be warned: increasing the index does not mean you won't end up with a bogus file. I think this warning should be added to http://www.veeam.com/kb1893 .
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- Veeam Software
- Posts: 21133
- Liked: 2139 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Error: related to file system limitation?
KB updated, thanks for the heads up!
-
- VP, Product Management
- Posts: 6029
- Liked: 2856 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Error: related to file system limitation?
b.vanhaastrecht wrote: So this means the file has become more fragmented under dedupe than the 4096-byte file record can track, and now can't accept writes anymore. It's a shame MS has not set a limit on dedupe to avoid this.
I mentioned in a post above that it would still be possible to hit this limit and that increasing it only makes this less likely; however, you are the first person I've seen actually hit it, so I appreciate you sharing that. I'm wondering if you could provide any detail about the size of your backup file?
-
- Service Provider
- Posts: 865
- Liked: 160 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
Re: Error: related to file system limitation?
Well, it was a slight surprise to us. We had dedupe running on a repository where both forward and reversed backup files are stored. We had excluded the reversed incremental folder, but somehow the exclusion didn't get applied to this particular file. We think the reversed file was already in the dedupe policy and we didn't notice it.
So, it was a reversed incremental file of 1.5 TB. After about 3 months of daily backups, the index of 4096 wasn't enough anymore.
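For reference, dedupe folder exclusions of the kind described above are set per volume (a sketch; the F: drive and folder name are examples). Consistent with what happened here, an exclusion only affects future optimization, so a file that was already optimized stays deduplicated:
Code: Select all
Set-DedupVolume -Volume "F:" -ExcludeFolder "F:\ReversedIncrementals"
Get-DedupVolume -Volume "F:" | Select-Object Volume, ExcludeFolder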
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- Service Provider
- Posts: 17
- Liked: 4 times
- Joined: Sep 07, 2012 7:07 am
- Contact:
Re: Error: related to file system limitation?
Did you change the priority optimization to allow immediate defragmentation of large files as per https://technet.microsoft.com/en-us/lib ... 91438.aspx?
Tune performance for large-scale operations. Run the following PowerShell script to:
- Disable additional processing and I/O when deep garbage collection runs
- Reserve additional memory for hash processing
- Enable priority optimization to allow immediate defragmentation of large files
Code: Select all
Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name HashIndexFullKeyReservationPercent -Value 70
Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name EnablePriorityOptimization -Value 1
These settings modify the following:
HashIndexFullKeyReservationPercent: This value controls how much of the optimization job memory is used for existing chunk hashes versus new chunk hashes. At high scale, 70% results in better optimization throughput than the 50% default.
EnablePriorityOptimization: With files approaching 1 TB, fragmentation of a single file can accumulate enough fragments to approach the per-file limit. Optimization processing consolidates these fragments and prevents this limit from being reached. By setting this registry key, dedup will add an additional process to deal with highly fragmented deduped files with high priority.
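To read back the values from the same (cluster) registry path the article uses, a quick check looks like this (a sketch; as the next post points out, this path may not exist on a non-clustered host):
Code: Select all
Get-ItemProperty -Path HKLM:\Cluster\Dedup |
    Select-Object HashIndexFullKeyReservationPercent, EnablePriorityOptimization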
-
- Service Provider
- Posts: 865
- Liked: 160 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
Re: Error: related to file system limitation?
No, I wasn't aware of this option. It looks like it does not prevent the issue, but it does optimize these large files with higher priority, so the chance of hitting the index limit is lower, though still possible.
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- Veeam ProPartner
- Posts: 21
- Liked: 2 times
- Joined: Feb 21, 2014 10:49 am
- Full Name: Daniel Ely
- Location: London, UK
- Contact:
Re: Error: related to file system limitation?
The article only seems to give the registry keys for a cluster setup; searching for HashIndexFullKeyReservationPercent or EnablePriorityOptimization only finds articles with the same keys for a cluster environment.
The article also talks about another key, DeepGCInterval, which does show up in some Google searches as also living under HKLM\System\CurrentControlSet\Services\ddpsvc\Settings. My guess is that the above 2 keys can also be placed there; does anyone have any experience?
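If that guess holds, the standalone equivalent would look like the sketch below. To be clear, this is an unverified assumption taken straight from the post above: the ddpsvc Settings path is only a guess at where the non-clustered dedup service reads these values.
Code: Select all
# UNVERIFIED guess, per the post above: standalone (non-cluster) dedup settings path
$path = 'HKLM:\SYSTEM\CurrentControlSet\Services\ddpsvc\Settings'
Set-ItemProperty -Path $path -Name HashIndexFullKeyReservationPercent -Value 70
Set-ItemProperty -Path $path -Name EnablePriorityOptimization -Value 1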
-
- Enthusiast
- Posts: 61
- Liked: 1 time
- Joined: Feb 04, 2016 12:58 pm
- Full Name: RNT-Guy
- Contact:
Re: Error: related to file system limitation?
b.vanhaastrecht wrote: So, it was a reversed incremental file of 1.5 TB. After about 3 months of daily backups, the index of 4096 wasn't enough anymore.
I'm confused: should I be setting the cluster size to something larger? To what? Our largest file is 2 TB (a database image file). Should I use 8192 to be safe and just eat the disk space savings it loses?
Also, is there any reason I can't just create another volume, formatted correctly, copy the files there, and then point Veeam to that location? Will that constitute a reseed instead of moving the files back?
-
- Veeam Software
- Posts: 21133
- Liked: 2139 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Error: related to file system limitation?
Another option is to periodically compact the backup file to defragment it.