-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
2012 R2 Dedupe Issue
Just ran into this issue:
After running backups for several weeks – one of the jobs all of a sudden failed with the below error:
Error: Client error: The requested operation could not be completed due to a file system limitation. Failed to write data to the file [H:\Backups\Backup Job Name\Backupfile2014-10-19T110050.vib].
The requested operation could not be completed due to a file system limitation
I found this "solution" (from Pipitone Consulting)
Due to the fact that the built in Windows Deduplication role has scheduled tasks that trigger optimization, garbage collection, and scrubbing jobs, the files can get heavily fragmented. Microsoft also recommends not having files on a Deduplicated volume that are over 1TB in size. Microsoft has provided a fix for this in Server 2012 R2, however in order to take advantage of the fix, you’ll need to format the Deduplicated volume using the /L switch to support having larger files on a volume with Deduplication enabled. This switch will eliminate the file system limitation error.
So as long as Veeam does not offer an option to split backup files into chunks (I still do not understand why this won't be supported in the near future), be really careful when using Windows Dedupe on big files.
You can check with:
fsutil fsinfo ntfsinfo H:
"Bytes Per FileRecord Segment: 4096" is what you want;
1024 is bad...
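As a rough sketch in PowerShell (drive letter H: is just the one from the error above; note that reformatting wipes the volume, so existing backups have to be moved off first):
# Check the current file record size (1024 = small file records, 4096 = large FRS)
fsutil fsinfo ntfsinfo H: | Select-String "Bytes Per FileRecord Segment"
# Reformat with 4KB file records ("large FRS") - THIS ERASES THE VOLUME
Format-Volume -DriveLetter H -FileSystem NTFS -UseLargeFRS -Confirm:$false
# Classic command-line equivalent: format H: /FS:NTFS /L /Q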
-
- Product Manager
- Posts: 20415
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: 2012 R2 Dedupe Issue
I'll take the liberty and quote the explanation provided by our Solutions Architect Tom Sightler; it should clarify the situation around the /L switch and what it can be used for. Thanks.
tsightler wrote:
So what does the /L switch do exactly? Well, it formats the NTFS filesystem with "file records" that are 4KB in size instead of 1KB. This is actually the default if you happen to be using a modern disk with a 4K hardware sector size, but that's still pretty rare, and most disks and storage systems still report classic 512B blocks, or even if they use 4K, report only 512B for compatibility purposes (common for SSDs, for example).
Anyway, the actual limit is pretty simple. NTFS typically has a single file record for every file, stored in the MFT. This file record is 1KB and contains information such as the list of attributes on the file and its actual location on the filesystem. As a file gets fragmented, the location of each fragment is also recorded in the file record. For large files (or files that are compressed or deduped), the actual file can be made up of many, many chunks, and thus the number of entries in the file record exceeds the 1KB limit. This is actually OK, as a file can have multiple file records; however, once a file has more than one file record, the pointers to the file records are stored in an attribute list, and unfortunately there can only be a fixed number of these structures for any given file, thus providing a hard limit on the total number of fragments that any given file can be composed of. Using 4KB file records instead of the default 1KB file records roughly increases the number of file records, and thus total fragments for any given file, by 4x (actually slightly more than that, since the normal 1KB records had some overhead for attributes). This makes the limit on the number of fragments that can compose any single file much more difficult to hit, but still not impossible.
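If you want to see how close a particular backup file already is to that limit, one quick way is Sysinternals Contig in analyze-only mode (Contig has to be downloaded separately; the path below is just the file from the error earlier in the thread):
# Analyze only (-a): report how many fragments the file currently occupies
contig.exe -a "H:\Backups\Backup Job Name\Backupfile2014-10-19T110050.vib"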
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: 2012 R2 Dedupe Issue
Yep... that's a perfect explanation.
Anyway, I still do think it SHOULD be possible to create smaller chunks; it would make a lot of things easier, like copying away backup files etc. (which could be multithreaded then, etc.).
I spoke to some of your team's guys regarding this at VMworld, but everybody told me to talk to Anton, who was incredibly hard to find, also at the Veeam party where I managed to get my hands on a VIP LED thing.
The only valid reason I heard for not making chunks is the number of files that would be created. If you ask me, this could easily be solved by putting backup files in subfolders or something...
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: 2012 R2 Dedupe Issue
Not related to your situation (dedupe at target), but related to Windows dedupe in a VM being backed up by Veeam. This is a lesson we learned with huge servers (many TBs).
One should carefully configure the schedule for the weekly dedupe tasks (optimization and garbage collection), since they can interfere with Veeam backup jobs. The default schedule runs on Saturday morning (around 3 AM); since many of us run backups at night, these tasks can generate a lot of block changes (garbage collection) while the VM is running on a VM snapshot with a VSS snapshot at the guest level. Depending on your configuration, this can cause the datastore to run out of free space very quickly.
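If it helps anyone, a minimal sketch of moving those weekly jobs out of the backup window with the built-in Deduplication cmdlets (run inside the guest; the schedule names are the 2012 R2 defaults, the days and times are just examples):
# List the built-in schedules (BackgroundOptimization, WeeklyGarbageCollection, WeeklyScrubbing)
Get-DedupSchedule
# Move garbage collection and scrubbing away from the weekend backup window
Set-DedupSchedule -Name "WeeklyGarbageCollection" -Days Tuesday -Start 09:00 -DurationHours 6
Set-DedupSchedule -Name "WeeklyScrubbing" -Days Wednesday -Start 09:00 -DurationHours 6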
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: 2012 R2 Dedupe Issue
Delo123 wrote:
This switch will eliminate the file system limitation error.
Just to clarify, and I know this was not your quote but rather a statement from the site you referenced: it does not eliminate the limitation, it simply makes the limitation harder to hit by providing four times as much space for file records. I would still strongly recommend keeping files no bigger than 1TB, or at least only marginally larger, if you are attempting to use Windows 2012/2012 R2 dedupe, as dedupe performance degrades quite a bit with files larger than this.
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: 2012 R2 Dedupe Issue
I agree with what you say... We have backup files which are around 2-4TB, and no, we cannot change that at this moment.
Regarding learning lessons, I also agree files larger than 1TB are a smoking gun. I have been trying to convince Veeam to create an option (I don't care if it's advanced, hidden, at your own risk, etc.)
to let us set a chunk size for backup files so these "issues" would not be there, but apparently it hasn't got the highest priority.
Sadly enough, I guess we have to wait for the first big, big customer to lose an entire dedupe repository they actually haven't copied away yet... I am very sure it will happen sometime...
Anyway, what we do to at least "minimize" the risk, with tremendous effort, is to make backup copies, then RAR them into multiple (5GB) chunks, and then dedupe them with Windows.
This way dedupe runs much more efficiently, we can schedule it to run multiple times a day (because it can pick up where it left off), we get much better performance, etc...
Restoring of course is a big problem because we first need to unpack etc... but hey, switching to Acronis just to be able to split up backups isn't an option I would really like to think about :(:(
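For completeness, the splitting step itself is just the standard RAR volume switches; a rough sketch (paths and names are made up, -m0 stores without recompression since Veeam already compresses the data):
# Split one backup copy into 5GB volumes without recompressing, then let Windows dedupe the parts
rar a -m0 -v5g "H:\Chunks\BackupJob.rar" "H:\Copies\BackupJob.vbk"
# Restoring means unpacking the whole volume set first
rar x "H:\Chunks\BackupJob.rar" "H:\Restore\"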
PS: We lost one big dedupe repository last night (first time I had a dedupe volume failing on me) with some strange error message. After a reboot, Windows fixed the error, for which I am very glad, since it holds approx. 70TB of backups deduped to 29TB... Now to get Veeam to find the backups on the repositories again.
-
- Chief Product Officer
- Posts: 31816
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 2012 R2 Dedupe Issue
Agree. In my "2014 VMware Backup Best Practices" breakout session at VeeamON two weeks ago, I was recommending against using Windows Server 2012 R2 deduplication with backup repositories for scalability reasons.
-
- Influencer
- Posts: 13
- Liked: 3 times
- Joined: Jan 22, 2013 5:36 am
- Contact:
Re: 2012 R2 Dedupe Issue
Speaking of the VeeamON breakout sessions, I would love to see the recorded versions of those. Did Veeam already post those somewhere?
Thanks!
-
- Chief Product Officer
- Posts: 31816
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 2012 R2 Dedupe Issue
Not yet, but I believe the intention was to make it available to attendees only...
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: 2012 R2 Dedupe Issue
When choosing a technology I always evaluate all the prerequisites, constraints and best practices around it. Since the first beta of W2K12, Microsoft clearly pointed out that file-level deduplication was designed for small to medium sized files, with the 1TB file limit. Thus I never considered Windows file-level dedupe as a production solution for Veeam repositories. Nevertheless, it is really good for "all purpose" file servers (MS Office files and so on) with high dedupe ratios.
You can't blame a product for not being compatible with an unsupported use of another product.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: 2012 R2 Dedupe Issue
Gostev wrote:
Not yet, but I believe the intention was to make it available to attendees only...
The final decision has not been taken yet; I personally hope everyone will be able to see them, because there is some really valuable content there!
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Influencer
- Posts: 13
- Liked: 3 times
- Joined: Jan 22, 2013 5:36 am
- Contact:
Re: 2012 R2 Dedupe Issue
Great to hear! I hope to hear good news about that soon!
-
- Chief Product Officer
- Posts: 31816
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 2012 R2 Dedupe Issue
Probably, not what you wanted to hear... but the final decision is that VeeamON breakout sessions will be available to attendees and Veeam ProPartners only.
-
- Expert
- Posts: 227
- Liked: 62 times
- Joined: Apr 10, 2014 4:13 pm
- Contact:
Re: 2012 R2 Dedupe Issue
Gostev wrote:
Agree. In my "2014 VMware Backup Best Practices" breakout session at VeeamON two weeks ago, I was recommending against using Windows Server 2012 R2 deduplication with backup repositories for scalability reasons.
1. I was there, and I missed your session. How do I watch the session after the fact?
2. Can you elaborate more on it here anyway?
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: 2012 R2 Dedupe Issue
I still didn't see an answer as to why Veeam doesn't want to add the ability to split backup files into chunks...
If you don't trust your customers to be able to handle more files, you could always make it some super hidden, "may the force be with you" switch.
Anyway, I do not see the reason "this would produce a lot of files" as a valid one; we have lots of VIBs anyway, just put them in subfolders or something...
-
- Service Provider
- Posts: 295
- Liked: 46 times
- Joined: Jun 30, 2015 9:13 am
- Full Name: Stephan Lang
- Location: Austria
- Contact:
Re: 2012 R2 Dedupe Issue
Hi, just for your information: I found this right now while looking into Veeam repository dedupe... My VBK files are around 2.5TB and I thought about enabling dedupe on 2012 R2, but I read that this isn't a good idea right now... Have a look here:
http://blogs.technet.com/b/filecab/arch ... iew-2.aspx
With Server 10 (2016) there is a solution coming soon:
Dedup Improvement #2: File sizes up to 1TB are good for dedup
While the current version of Windows Server supports the use of file sizes up to 1TB, files “approaching” this size are noted as “not good candidates” for dedup. The reasons have to do with how the current algorithms scale, where, for example, things like scanning for and inserting changes can slow down as the total data set increases. This has all been redesigned for Windows Server 2016 with the use of new stream map structures and improved partial file optimization, with the results being that you can go ahead and dedup files up to 1TB without worrying about them not being good candidates. These changes also improve overall optimization performance by the way, adding to the “performance” part of the story for Windows Server 2016.
-
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
- Contact:
Re: 2012 R2 Dedupe Issue
"with the results being that you can go ahead and dedup files up to 1TB"
Well that still isn't good for most of us, maybe 5TB would be helpful but 1TB is tiny when you're talking about VBKs.
It's a shame, I was hoping 2016 would work - it could potentially save us ££££'s in storage.
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: 2012 R2 Dedupe Issue
Splitting up VBKs into chunks would be a solution for Veeam, but that's not possible I guess; not much customer demand for bigger backups on dedupe volumes.
-
- Chief Product Officer
- Posts: 31816
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 2012 R2 Dedupe Issue
We will approach this differently, stay tuned for more v9 announcements
But you can certainly count on using Server 2016 dedupe with Veeam v9.
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: 2012 R2 Dedupe Issue
Cool... party!!!
-
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
- Contact:
Re: 2012 R2 Dedupe Issue
Gostev wrote:
We will approach this differently, stay tuned for more v9 announcements. But you can certainly count on using Server 2016 dedupe with Veeam v9.
Interesting, but I won't hold my breath - there was lots of raving about 2012 R2 dedupe, blogs and the like - but it never worked.
Trouble is, it's so hard to test, as you need decent hardware, lots of VMs and a real production environment - it all works fine in a test lab with small VBKs!
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: 2012 R2 Dedupe Issue
I wouldn't say it isn't working. On top of Veeam Dedupe and Compression we save another 90TB on a 108TB Storage Pool...
PS C:\Windows\system32> get-dedupstatus
FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
--------- ---------- -------------- ------------- ------
51.86 TB 7.31 TB 9873 9872 J:
34.52 TB 32.42 TB 433 433 G:
32.23 TB 26.39 TB 498 498 H:
49.96 TB 28.08 TB 284 284 I:
VBKs range from 300GB to 4TB and this server has been running flawlessly for 2 years, although the hardware is underpowered...
At first we worked with one virtual 62TB volume but had issues with dedupe job runtimes, so we created multiple virtual thin volumes from the same storage pool.
However, this only works on forward incremental backups with approx. 2 fulls every 2 weeks.
We have another server for backup copies but never got this to work completely, as GFS rebuilds the VBKs from existing files, which kills performance.
It would be nicer to receive the VBKs from the primary backup repository instead...
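For anyone copying this layout, a rough sketch of adding one more thin volume from the same pool and enabling dedupe on it (pool name, size and drive letter are made-up examples):
# Carve another thin-provisioned virtual disk out of the existing storage pool
New-VirtualDisk -StoragePoolFriendlyName "BackupPool" -FriendlyName "Repo-K" -ProvisioningType Thin -Size 30TB -ResiliencySettingName Simple
# ...then initialize, partition and format the new disk with large FRS (format /L, see above), and enable dedupe:
Enable-DedupVolume -Volume K:
Start-DedupJob -Volume K: -Type Optimization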
-
- Chief Product Officer
- Posts: 31816
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 2012 R2 Dedupe Issue
Guido said it all. While I cannot promise the Windows Server 2016 dedupe will work for everyone and in every scenario, it is obvious that it's going to be vastly improved over the current version, which in turn is already good enough for a number of smaller customers (based on feedback that can be found on these very forums).
Delo123 wrote:
It would be nicer to receive the VBKs from the primary backup repository instead...
Yes, we are adding Active Full (so to speak) as an option to v9 Backup Copy jobs, specifically to better support local Backup Copy to deduplicating storage.
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: 2012 R2 Dedupe Issue
Active full for backup copies
Anton, I really need to buy you a drink or two at VMworld!
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: 2012 R2 Dedupe Issue
When we have finished the announcements for v9, I think a dozen beers would be a better reward.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
-
- Veeam Vanguard
- Posts: 227
- Liked: 55 times
- Joined: Jan 13, 2011 5:42 pm
- Full Name: Jim Jones
- Location: Hurricane, WV
- Contact:
Re: 2012 R2 Dedupe Issue
For those on the fence, I agree with Guido that Server 2012 R2 dedupe for backup files does serve its purpose. I ran into the original issue in this post about 1.5 years ago and then reformatted with /L. I haven't hit an issue with it since (current status: repeatedly banging hand on wooden desk). Further, it allows me to hold about 80 TB of data in about 4 TB on disk (see below). I will say, if you are going to do it, change the dedupe window to 8 days; that will allow you to perform your synthetic fulls on non-deduplicated files. Otherwise, system load goes through the roof when you do roll-ups.
PS C:\Windows\system32> Get-DedupStatus
FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
--------- ---------- -------------- ------------- ------
6.55 TB 63.56 TB 854 849 D:
5.47 TB 15.58 TB 787 787 E:
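In case it helps someone replicate the 8-day window tip: assuming that maps to the volume's minimum file age for optimization, a minimal sketch would be (drive letter is an example):
# Only optimize files older than 8 days, so the active chain and fresh synthetic fulls are left alone
Set-DedupVolume -Volume D: -MinimumFileAgeDays 8
Get-DedupVolume -Volume D: | Format-List Volume, MinimumFileAgeDays, SavedSpace, SavingsRate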
Jim Jones, Sr. Product Infrastructure Architect @iland / @1111systems, Veeam Vanguard
-
- Expert
- Posts: 194
- Liked: 18 times
- Joined: Apr 16, 2015 9:01 am
- Location: Germany / Bulgaria
- Contact:
Re: 2012 R2 Dedupe Issue
Gostev wrote:
Probably, not what you wanted to hear... but the final decision is that VeeamON breakout sessions will be available to attendees and Veeam ProPartners only.
Now we are in February 2016, so I would like to ask again kindly: is there any chance for mere mortals to get the VeeamON recordings, or at least the presentation files, regarding Server 2012 R2 deduplication best practices?
It is your product we want to buy
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: 2012 R2 Dedupe Issue
@Anguel A lot has been written about how to set up deduped repos.
Do you have any specific questions or worries? In our case we have saved over 445TB of storage on 108TB physical and still have 15TB free:
FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
--------- ---------- -------------- ------------- ------
53.82 TB 54.03 TB 334 335 I:
52.78 TB 48.9 TB 197 195 J:
42.6 TB 96.52 TB 303 303 G:
23.85 TB 247.57 TB 653 666 H: