-
- Product Manager
- Posts: 20406
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Doesn't your tape device itself provide compression? Thanks.
-
- Expert
- Posts: 227
- Liked: 62 times
- Joined: Apr 10, 2014 4:13 pm
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
I do the same thing, so I just made sure hardware compression was turned on for the tape jobs.
-
- Enthusiast
- Posts: 35
- Liked: 7 times
- Joined: Jun 24, 2013 9:43 am
- Full Name: Hussain Mahfood
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
@lightsout
I do the same, but Veeam compression is better. I would stick with hardware compression till Veeam provides a feature for this.
@Vladimir Eremin
Thanks, I do have it enabled and was thinking to utilize Veeam compression instead.
-
- Product Manager
- Posts: 20406
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
I think that by disabling software compression in the source job and enabling hardware compression in the secondary (tape) job you would get both the best deduplication ratio in the primary repository and decent compression on tapes.
So, to me it sounds like the best approach.
Thanks.
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Feb 29, 2016 7:08 pm
- Full Name: Kenneth Dalbjerg
- Contact:
[MERGED]: Tape & Windows 2012 R2 Dedupe
Hi
We have a dedicated Windows 2012 R2 server with Veeam installed on it.
It has a 40GB disk drive to temporarily store backups on, and then it moves the backups to tape.
Could we enable dedupe on the 40GB disk drive to save some space? Or will this break the tape backups?
Regards Kenneth Dalbjerg
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi,
You can do that; it won't harm, however it may slow down the backup process. You should configure the jobs for active full backups plus incremental backups, since jobs with transformation will require block "de-hydration" and then "re-hydration" on the storage. These operations might require significant time. The same applies to the Backup to Tape job - prior to writing to tape, your backups will have to be "re-hydrated" first. Please review this thread for more details in regard to Win2012R2 deduplication.
Thank you.
-
- Veteran
- Posts: 257
- Liked: 40 times
- Joined: May 21, 2013 9:08 pm
- Full Name: Alan Wells
- Contact:
Re: [MERGED]: Tape & Windows 2012 R2 Dedupe
dalbjerg wrote: We have a dedicated Windows 2012 R2 server with Veeam installed on it. It has a 40GB disk drive to temporarily store backups on, and then it moves the backups to tape. Could we enable dedupe on the 40GB disk drive to save some space? Or will this break the tape backups?
You will be fine with this. We have a scale-out repository with 10 very large extents assigned to it, adding up to over 100TB of space.
I have Windows 2012 Dedupe running on all of my extents and have no problems.
Remember, dedupe works best when you are running active full backups. I set my volume deduplication settings to dedupe files older than 2 days. That way they get written to disk and then to tape before they get deduped out. Doing it this way you will have no disk resource issues.
I will give you an example. I have a single drive with 18TB of space. I currently have 3.14TB of free space on the drive, but I am storing 31.9TB worth of data on it. I have deduped out 52% of that used space. Not bad!
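For reference, a minimal sketch of those two settings using the built-in Deduplication cmdlets (the E: volume is just a placeholder, not the poster's actual layout):
Code:
# Enable dedup on the repository volume and only optimize files older than 2 days,
# so backups land on disk (and on tape) before they get chunked by dedup.
Enable-DedupVolume -Volume "E:"
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 2
# Check how much space dedup is saving on the volume.
Get-DedupStatus -Volume "E:" | Format-List Volume, FreeSpace, SavedSpace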
-
- Enthusiast
- Posts: 61
- Liked: 1 time
- Joined: Feb 04, 2016 12:58 pm
- Full Name: RNT-Guy
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi everyone. I've been going through these recommendations and I wonder if our situation is slightly different. We have several customers where we back up their VMs to our own Veeam host that has 21TB of usable space after RAID, hot spare, etc., before compression and deduplication. After we back up their data to our host, we send that data to a Veeam cloud host. They use their own storage and don't use Windows Server 2012.
Our goals are this:
1. Meet customer backup windows. Restore windows are reasonable and not typically a sticking point.
2. Use as little space as possible on our Veeam host on-prem at the customer, since we pay for this unit out of pocket and adding more space is not free.
3. Use as little space as possible with the hosting provider. We get charged and thus the customer gets charged; the less expensive it is, the more likely the customer will choose our backup solution.
As of now we're doing incrementals with a weekly synthetic full to help with the backup window issue. We don't schedule active fulls.
We have deduplication running on our repository server on our host. We have a WAN accelerator, as does the cloud host, which helps reduce the copy job time.
Because cost is usually the first sticking point, we're trying to figure out how to keep the size of the backups at the cloud provider as small as possible. Unfortunately we have to assume they'd be using cheap disk that has no compression/deduplication built in, so it's up to us to keep the files small.
The backup jobs are set to dedupe-friendly compression with the local target storage optimization. The copy job is set to high or extreme compression.
We have Win2012R2 deduplication set to 0 days so that it dedupes right away, keeping our size on disk low. If we had a big enough buffer I could see us letting it go 2 days so that the copy job isn't recompacting the data on the way out. However, the copy jobs start within minutes or hours of the backup jobs finishing, and this is at night, whereas the deduplication doesn't start until 10am, so it's pretty unlikely it would be unpacking deduped data just to copy offsite.
Hopefully this dump of info wasn't too much and is easy to navigate. My apologies if not.
What I'm wondering is
a. what is the combination that results in the smallest files at the cloud provider assuming they don't do their own compression & deduplication?
b. what speeds up the backups the best given our goals?
c. will meeting a&b necessitate using more space on our onsite host?
* side note: our host is dedicated to veeam with internal storage so there's plenty of cpu and memory. disk is the limited space without buying another set of repository space.
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Cool, thanks for sharing these tips in a concise format. lightsout wrote: I'll give my feedback on 2012 dedup best practices.
- Format the disk using the command line "/L" for "large size file records".
- Also format using 64KB cluster size.
- Use Windows 2012 R2. Apply all patches as some rollups have improvements to dedup.
- Use Active full jobs with incrementals.
- Turn Veeam's compression off and use the "LAN" block size. Veeam's deduplication can stay on. This gave best overall space savings for me.
- If possible, spread your active full backups over the entire week. I have a script to do it if you're interested.
- Modify the garbage collection schedule to run daily rather than weekly.
- Try to keep your VBK files below 1TB in size - Microsoft doesn't officially support files bigger than this. Large files take a long time to dedup and will have to be fully reprocessed if the process is interrupted. I've had a 4TB VBK process fine, it just takes a long time!
- Use multiple volumes, where possible. Windows dedup is single threaded, but it can process multiple volumes at once. Although bigger volumes mean better dedup ratios!
- Configure your dedup process to run once a day, and for as long as possible.
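For reference, the schedule-related tips above map roughly onto the built-in Deduplication cmdlets like this (a minimal sketch only; the schedule names, start times and window lengths are example values, not settings from this thread):
Code:
# Run garbage collection daily instead of the default weekly schedule.
New-DedupSchedule -Name "DailyGC" -Type GarbageCollection -Days Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday -Start "07:00" -DurationHours 5
# One optimization run per day, with as long a window as you can afford.
New-DedupSchedule -Name "DailyOptimization" -Type Optimization -Days Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday -Start "10:00" -DurationHours 12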
--
/* Veeam software enthusiast user & supporter ! */
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
@rnt-guy
a. Overall, your setup looks good in terms of meeting your goals. One suggestion could be to use the 'WAN target' setting, which will allow you to decrease the backup file size due to the smaller block size it uses.
b. Depends on the job bottleneck stats.
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi LightsOut,
lightsout wrote: If possible, spread your active full backups over the entire week. I have a script to do it if you're interested.
I'm interested in the script - may I know what that script is for?
lightsout wrote: Try to keep your VBK files below 1TB in size - Microsoft doesn't officially support files bigger than this. Large files take a long time to dedup and will have to be fully reprocessed if the process is interrupted. I've had a 4TB VBK process fine, it just takes a long time!
Some of my Exchange Server & SQL Server backups are larger than 8 TB .VBK files, so do I just skip those volumes?
lightsout wrote: Configure your dedup process to run once a day, and for as long as possible.
If the backup copy job is running continuously, is it still possible to run the deduplication?
--
/* Veeam software enthusiast user & supporter ! */
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi Alex,
foggy wrote: @rnt-guy a. Overall, your setup looks good in terms of meeting your goals. One suggestion could be to use the 'WAN target' setting, which will allow you to decrease the backup file size due to the smaller block size it uses. b. Depends on the job bottleneck stats.
Does that mean that a WAN target block size smaller than or equal to the SAN stripe/block size also means the backup size can be smaller?
--
/* Veeam software enthusiast user & supporter ! */
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
This relates to the fact that with a smaller block size you would not need to copy, say, an entire 1MB block if only 1KB has changed in it, but only 256KB (if you switch to WAN target).
-
- Expert
- Posts: 227
- Liked: 62 times
- Joined: Apr 10, 2014 4:13 pm
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
albertwt wrote: Hi LightsOut,
I'm interested in the script - may I know what that script is for?
Some of my Exchange Server & SQL Server backups are larger than 8 TB .VBK files, so do I just skip those volumes?
If the backup copy job is running continuously, is it still possible to run the deduplication?
1. So here is my code. This will cycle the active full backups across the days of the week. 2. I'd suggest skipping those until Windows 2016; they will take a long time to dedup.
Code:
$days = @("Friday", "Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday")
$i = 0
$jobs = Get-VBRJob | ? { $_.JobType -eq "Backup" }
foreach ($job in $jobs) {
    $job.Name
    $job | Set-VBRJobAdvancedBackupOptions -EnableFullBackup $true -FullBackupDays $days[$i] -FullBackupScheduleKind Daily -DayOfWeek $days[$i] | Out-Null
    $i++
    if ($i -ge $days.Count) { $i = 0 }
}
3. Yes, you can run both together; just make sure there is enough I/O for the dedup process to run and for your jobs to complete in their windows.
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Cool, many thanks for the clarification.
--
/* Veeam software enthusiast user & supporter ! */
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
If I understood correctly, Windows 2016 will be even worse for 1TB+ files. Currently 1TB+ files are being deduped, but it takes quite long, which isn't that much of an issue anymore thanks to Veeam's scale-out repositories: as incrementals can now go to different volumes, the dedupe job on the fulls doesn't have to be interrupted. However, in Server 2016 I read that Windows dedupe will only dedupe the first TB, so for an 8TB file, 1TB will be deduped and 7TB will not. That's why we still hope Veeam will come up with an option to split VBKs into smaller chunks...
-
- Expert
- Posts: 227
- Liked: 62 times
- Joined: Apr 10, 2014 4:13 pm
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
I've not heard that, but I guess we will find out!
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Delo123 wrote: However in Server 2016 i read that windows dedupe will only dedupe the first TB so for a 8TB file
The first almost 4TB actually, although we did not re-test on RTM yet. That 1TB is the true limit for modified files; however, backup files are never touched once created (at least with the recommended job settings for deduplicating storage).
-
- Enthusiast
- Posts: 89
- Liked: 35 times
- Joined: May 09, 2016 2:34 pm
- Full Name: JM Severino
- Location: Switzerland
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi
I have not read the documentation for W2016, but dedup seems not to be supported on W2012R2 with volumes larger than 64TB. It is a VSS issue rather than a dedup problem. Windows will let you activate deduplication and won't warn you, but it will produce all kinds of weird errors, mostly of the "The parameter is incorrect." kind.
https://support.microsoft.com/en-us/kb/2967756
Regards.
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi Steve, that is correct. For that reason most of us create a storage pool and carve multiple thin volumes from it, each just under 64TB. Backups, and even better scale-out repositories splitting up fulls and incrementals, can be distributed among these, giving us multiple dedupe threads etc.
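For anyone scripting this, a rough sketch of carving one such sub-64TB thin volume from an existing storage pool (the pool name, friendly name, drive letter and Simple resiliency are all placeholder choices, not settings from this thread):
Code:
# Thin 62TB virtual disk from an existing pool, staying under the 64TB VSS limit.
New-VirtualDisk -StoragePoolFriendlyName "RepoPool" -FriendlyName "VeeamExtent1" -Size 62TB -ProvisioningType Thin -ResiliencySettingName Simple
# Bring it online, partition it, and format with 64KB clusters and large FRS as recommended earlier in the thread.
$disk = Get-VirtualDisk -FriendlyName "VeeamExtent1" | Get-Disk
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -UseMaximumSize -DriveLetter E
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -UseLargeFRS -Confirm:$false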
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Feb 08, 2017 9:04 am
- Full Name: Dave
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hello
Just to save someone else from any pain when formatting very large volumes.
You can use the /q switch combined with /L
format g: /A:64k /q returns (as expected):
C:\Users\administrator>fsutil fsinfo ntfsinfo g:
NTFS Volume Serial Number : 0x0422992622991e2c
NTFS Version : 3.1
LFS Version : 2.0
Number Sectors : 0x0000001b491defff
Total Clusters : 0x0000000036923bdf
Free Clusters : 0x0000000036922a0e
Total Reserved : 0x0000000000000040
Bytes Per Sector : 512
Bytes Per Physical Sector : 4096
Bytes Per Cluster : 65536
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x0000000000010000
Mft Start Lcn : 0x000000000000c000
Mft2 Start Lcn : 0x0000000000000001
Mft Zone Start : 0x000000000000c000
Mft Zone End : 0x000000000000cca0
Max Device Trim Extent Count : 0
Max Device Trim Byte Count : 0x0
Max Volume Trim Extent Count : 62
Max Volume Trim Byte Count : 0x40000000
Resource Manager Identifier : A4A80A8E-ED80-11E6-90E7-00259069050F
However, using the /L switch as well:
C:\Users\administrator>format g: /A:64k /L /q
The type of the file system is NTFS.
Enter current volume label for drive G: Veeam Extent 2
WARNING, ALL DATA ON NON-REMOVABLE DISK
DRIVE G: WILL BE LOST!
Proceed with Format (Y/N)? y
QuickFormatting 54.6 TB
Volume label (32 characters, ENTER for none)? Veeam Extent 2
Creating file system structures.
Format complete.
54.6 TB total disk space.
54.6 TB are available.
C:\Users\administrator>fsutil fsinfo ntfsinfo g:
NTFS Volume Serial Number : 0x98c4dbf1c4dbcf9c
NTFS Version : 3.1
LFS Version : 2.0
Number Sectors : 0x0000001b491defff
Total Clusters : 0x0000000036923bdf
Free Clusters : 0x0000000036922a01
Total Reserved : 0x0000000000000040
Bytes Per Sector : 512
Bytes Per Physical Sector : 4096
Bytes Per Cluster : 65536
Bytes Per FileRecord Segment : 4096
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x0000000000100000
Mft Start Lcn : 0x000000000000c000
Mft2 Start Lcn : 0x0000000000000001
Mft Zone Start : 0x000000000000c000
Mft Zone End : 0x000000000000cca0
Max Device Trim Extent Count : 0
Max Device Trim Byte Count : 0x0
Max Volume Trim Extent Count : 62
Max Volume Trim Byte Count : 0x40000000
Resource Manager Identifier : A4A80A99-ED80-11E6-90E7-00259069050F
That saved me several days formatting 2x 54TB drives - hope someone finds it helpful.
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi, cheese wrote: Bytes Per FileRecord Segment : 4096
Does the Bytes Per FileRecord Segment value have to be 4096 rather than 1024?
I usually do it using the below PowerShell script:
Code:
function FormatDisk([string]$driveletter, [string]$drivelabel)
{
    Format-Volume `
        -DriveLetter $driveletter `
        -NewFileSystemLabel $drivelabel `
        -FileSystem NTFS `
        -AllocationUnitSize 65536 -Force -Confirm:$false `
        -UseLargeFRS
}
FormatDisk -driveletter D -drivelabel "DATABASE"
--
/* Veeam software enthusiast user & supporter ! */
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
[MERGED] Enabling Deduplication on Veeam Backup Repo NTFS ?
Hi All,
I'm about to turn on Windows Server 2012 R2 deduplication on the Veeam backup server's 2x iSCSI NTFS LUNs (each 40 TB in size), so that I can run the deduplication job during business hours while the Veeam backup jobs run after hours.
Does the issue with .VBK files larger than 1 TB described in https://www.veeam.com/kb2023 still persist, or has it been fixed in Veeam Backup & Replication 9.5 Update 2?
Any kind of help and suggestion would be greatly appreciated.
Thanks,
--
/* Veeam software enthusiast user & supporter ! */
-
- Product Manager
- Posts: 5797
- Liked: 1215 times
- Joined: Jul 15, 2013 11:09 am
- Full Name: Niels Engelen
- Contact:
Re: Enabling Deduplication on Veeam Backup Repo NTFS ?
The 1TB limit is related to Windows 2012 R2 and not really to Veeam. It is advised to use per-VM chains if you want to avoid files bigger than 1TB; however, big servers may easily create full backup files above 1TB.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Enabling Deduplication on Veeam Backup Repo NTFS ?
Niels,
That's what concerns me, because I've got multiple SQL Servers that are 2-12.5 TB in size - so do I separate the backups into one VM per Veeam backup job?
And is it advisable to enable Windows Server 2012 R2 deduplication at all if most of my Veeam backup files are larger than 2 TB?
--
/* Veeam software enthusiast user & supporter ! */
-
- Product Manager
- Posts: 5797
- Liked: 1215 times
- Joined: Jul 15, 2013 11:09 am
- Full Name: Niels Engelen
- Contact:
Re: Enabling Deduplication on Veeam Backup Repo NTFS ?
You can bundle multiple VMs in the job if you enable per-VM backup chains; however, the default deduplication might become an issue for the large servers. Did you already check the other thread veeam-backup-replication-f2/best-practi ... 2-105.html ? There are some examples in there from customers with big VMs.
It might be better to look at Windows 2016 with ReFS if you are looking at space savings.
It might be better to look at Windows 2016 with ReFS if you are looking at space savings.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Enabling Deduplication on Veeam Backup Repo NTFS ?
Hi Niels,
Thanks for the pointer and for sharing the potential issue.
At the moment I do not have a Windows Server 2016 license yet, hence I can only use 2012 R2.
So if that's the case, I will just use deduplication on my file server VMs and not on the Veeam backup repository.
--
/* Veeam software enthusiast user & supporter ! */
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Please note that this will affect the amount of changes copied during incremental job runs (unless you have BitLooker enabled).
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi Albert,
The 1TB limit seems to be pretty soft; we have had 2012 R2 dedup enabled since forever and are quite happy with it. We have never had a failed restore or SureBackup job from it. Our largest VBK (VM) is around 6TB.
It is important, however, to use 64k clusters and format the volumes with /L, and never to create a volume over 64TB since VSS doesn't support it (we use thin 62TB volumes to be safe). We currently have around 4PB of Veeam backup data combined over our repositories and 400TB on a 2016 dedupe volume. Actually, 2016 dedupe seems to be a bit worse regarding actual space savings (performance got better due to multi-threading, however); I believe the reason is that in 2016 only the first TB or first 2TB of a file actually gets deduped (read that somewhere).
-
- Enthusiast
- Posts: 68
- Liked: 5 times
- Joined: Aug 28, 2015 12:40 pm
- Full Name: tntteam
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Hi,
I'm digging "old" thread just to add some infos.
For those saying to activate per-vm on deduped windows volumes, I would advice not to do so. I think windows get lost when there is too much files because I faced this problem and once it happens, any optimization job will faill with "exited unexpectedly" even after full GC and scrub jobs that went successfully.
On the other hand, never got any problem on another deduped volume which do not have per-vm backup file split, dedupe works great and my files are like 4TB.
Also about win2016, I don't think the story about only first TB processed is true, you can check yourself using Measure-DedupFileMetadata cmdlet, check DedupSize and DedupDistinctSize values, on a folder with multiple deduped files over 3, 4TB (cmdlet is slow to run, but it is supposed to count every block of every file in the folder, so...)
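For anyone wanting to run that check themselves, a quick example along the lines described (the path is just a placeholder; the cmdlet can take a long time on large folders):
Code:
# Compare logical size vs. deduplicated/distinct size for the backup files in a folder.
$m = Measure-DedupFileMetadata -Path "E:\Backups"
$m | Format-List FilesCount, Size, DedupSize, DedupDistinctSize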
I'm digging "old" thread just to add some infos.
For those saying to activate per-vm on deduped windows volumes, I would advice not to do so. I think windows get lost when there is too much files because I faced this problem and once it happens, any optimization job will faill with "exited unexpectedly" even after full GC and scrub jobs that went successfully.
On the other hand, never got any problem on another deduped volume which do not have per-vm backup file split, dedupe works great and my files are like 4TB.
Also about win2016, I don't think the story about only first TB processed is true, you can check yourself using Measure-DedupFileMetadata cmdlet, check DedupSize and DedupDistinctSize values, on a folder with multiple deduped files over 3, 4TB (cmdlet is slow to run, but it is supposed to count every block of every file in the folder, so...)