Comprehensive data protection for all workloads
Vitaliy S.
VP, Product Management
Posts: 27371
Liked: 2799 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Vitaliy S. »

Oh, ok! I would appreciate it if you could update this topic with your findings (setting the throughput duration to 0).
baatch
Enthusiast
Posts: 30
Liked: 4 times
Joined: May 16, 2013 12:52 am
Full Name: Alexander

Re: Best Practice for MS Server 2012 DeDup Repo

Post by baatch »

Just tested: the throughput duration can only be set to between 1 and 24 hours.
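
In cmdlet terms that's the -DurationHours parameter on the dedup schedule. A quick sketch (assuming the default "ThroughputOptimization" schedule name, so adjust if yours differs):

# List the existing schedules
Get-DedupSchedule
# DurationHours only accepts values from 1 to 24; 0 ("no limit") is rejected
Set-DedupSchedule -Name "ThroughputOptimization" -DurationHours 24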
Vitaliy S.
VP, Product Management
Posts: 27371
Liked: 2799 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Vitaliy S. »

Makes sense...
mongie
Expert
Posts: 152
Liked: 24 times
Joined: May 16, 2011 4:00 am
Full Name: Alex Macaronis
Location: Brisbane, Australia

Re: Best Practice for MS Server 2012 DeDup Repo

Post by mongie » 2 people like this post

Alexander - What I've done is go into Task Scheduler, export all the built-in dedupe jobs (they're under Microsoft \ Windows \ Deduplication) and then disable them. Then I import the jobs I just exported, and you can customise them. The reason for the export/import is that if you just edit the built-in jobs, the settings you change seem to get reset after a while.

I've turned off the "stop after x hours" setting, and that allows jobs to run for days / weeks if required.

It's really just running an executable file with some switches. You can also edit the switches to enable /full, /priority high and /backoff (which stops processing when it detects I/O), and edit the memory usage target (I set mine to 75).

I hope this helps.
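
For reference, roughly the same knobs are exposed through the dedup cmdlets, if you'd rather not touch the scheduled tasks at all. This is a sketch, not an exact equivalent of the task switches (E: is an example volume):

# High priority, raised memory target (percent), and back off when other I/O is detected
Start-DedupJob -Volume E: -Type Optimization -Priority High -Memory 75 -StopWhenSystemBusy

# The cmdlets also expose a -Full switch on garbage collection and scrubbing jobs
Start-DedupJob -Volume E: -Type GarbageCollection -Full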
baatch
Enthusiast
Posts: 30
Liked: 4 times
Joined: May 16, 2013 12:52 am
Full Name: Alexander

Re: Best Practice for MS Server 2012 DeDup Repo

Post by baatch »

Thanks mongie for the tips. I'm going to try changing the schedule options through PowerShell commands instead and see if they still get reset.

If that doesn't work, I will be doing your Task Scheduler hack. What does the /full switch do? I can't find anything on that.
mongie
Expert
Posts: 152
Liked: 24 times
Joined: May 16, 2011 4:00 am
Full Name: Alex Macaronis
Location: Brisbane, Australia

Re: Best Practice for MS Server 2012 DeDup Repo

Post by mongie »

Imagine /full being a full backup and not /full being an incremental.

I'm pretty sure that's the difference.
veremin
Product Manager
Posts: 20400
Liked: 2298 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin »

I'm going to try changing the schedule options through PowerShell commands instead and see if they still get reset.
There is also an article on how the deduplication schedule can be changed via PowerShell; it's a nice place to start your acquaintance with the deduplication PS cmdlets.

Thanks.
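
In short, the schedule cmdlets look like this (a minimal sketch; the schedule name, days and times below are just examples):

# List the existing schedules
Get-DedupSchedule

# Adjust an existing schedule
Set-DedupSchedule -Name "ThroughputOptimization" -Start 18:00 -DurationHours 12

# Or create a dedicated optimization window
New-DedupSchedule -Name "NightlyOpt" -Type Optimization -Days Monday,Tuesday,Wednesday,Thursday,Friday -Start 22:00 -DurationHours 8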
mongie
Expert
Posts: 152
Liked: 24 times
Joined: May 16, 2011 4:00 am
Full Name: Alex Macaronis
Location: Brisbane, Australia

Re: Best Practice for MS Server 2012 DeDup Repo

Post by mongie »

A tip that I (a novice at PowerShell) found was to add | fl after my cmdlets.

E.g. Get-DedupStatus | fl

and

Get-DedupJob | fl
veremin
Product Manager
Posts: 20400
Liked: 2298 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin »

A tip that I (a novice at PowerShell) found was to add | fl after my cmdlets.
Yep, fl (Format-List) is a nice cmdlet for controlling the output of your main script.
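You can also name just the properties you care about instead of dumping everything, e.g. (the per-volume statistics come from Get-DedupVolume):

Get-DedupVolume | Format-List Volume, Capacity, SavedSpace, SavingsRate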
baatch
Enthusiast
Posts: 30
Liked: 4 times
Joined: May 16, 2013 12:52 am
Full Name: Alexander

Re: Best Practice for MS Server 2012 DeDup Repo

Post by baatch »

I posted on the MS forum about my dedup concerns regarding files larger than 2TB and the 24-hour limit on throughput scheduling, and got this reply from the development guys behind the Server 2012 dedup feature: http://social.technet.microsoft.com/For ... 6d9604f06/

The number applies to how much can be deduplicated on a single volume.
If the machine has multiple cores and sufficient memory, then Dedup will schedule jobs to run in parallel, one per volume, so the overall throughput of the machine could be 4x, 8x or even 16x the number you quoted.
If you have a choice when you provision a machine intended for Dedup, provision it with multiple large volumes rather than one huge volume (e.g. create 16 x 4TB volumes rather than a single 64TB volume); Dedup does not at the moment have the ability to run multiple jobs in parallel on the same volume.


And I don't know what to make of that reply, really. Should I split my 16 TB volume into 4 volumes instead? How would I then configure my Veeam job to land on the different volumes? My full backup is 3.5 TB.

So confusing...
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Best Practice for MS Server 2012 DeDup Repo

Post by tsightler »

Can you clarify what exactly you are attempting to accomplish? Is your backup 3.5TB with Veeam compression disabled? Are you using reverse incremental? How many restore points are you trying to keep on disk?

It's well known that the limitation of Windows 2012 dedupe processing is per-volume, so yes, you could easily split your 16TB volume into 2x 8TB volumes and split your backups into two jobs that are roughly equal in size (1.75TB each), and you'd be good. But without knowing more about what exactly your goal is, it's hard to decide if that's the best option; it might be that you'd be better off simply using reverse incremental backup and not using Windows 2012 dedupe at all.
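
To illustrate the per-volume parallelism (drive letters are examples): one optimization job per volume, and they run side by side as long as there is spare CPU/RAM:

"E:", "F:" | ForEach-Object { Start-DedupJob -Volume $_ -Type Optimization }
Get-DedupJob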
baatch
Enthusiast
Posts: 30
Liked: 4 times
Joined: May 16, 2013 12:52 am
Full Name: Alexander

Re: Best Practice for MS Server 2012 DeDup Repo

Post by baatch »

I have Veeam compression enabled with the dedupe-friendly option. My config looks like this:

1 Veeam backup job, 56 VMs, forward incremental, active full every Saturday, 30 restore points.
The full backup size has actually grown to 3.9 TB, and incrementals run between 500-700GB.
16.4 TB volume on RAID 5 (7 x 3TB 7.2K DAS) on a physical server with Server 2012 dedupe configured.
Right now I have only a 27% dedup rate and 4.29 TB saved.

Any suggestions as to what I should do differently?
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Best Practice for MS Server 2012 DeDup Repo

Post by tsightler »

If I had that configuration and was only interested in keeping 30 days of backups, I'd honestly probably just use reverse incremental and not use Windows 2012 dedupe at all. Veeam with normal compression will likely reduce your full backup to <2TB, and your incrementals to <300GB on average. That means that 30 days of backups would only need ~11TB anyway, and you said you have a 16TB volume. And that's assuming only a 2:1 compression ratio; most environments see closer to 3:1, which would put it closer to 9TB with Veeam alone.
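
Spelling that math out (rounded, with the 2:1 compression assumption):

1 full VBK: ~3.9 TB / 2 = ~2.0 TB
29 daily rollbacks: 29 x ~0.3 TB = ~8.7 TB
Total for 30 restore points: ~10.7 TB, call it 11 TB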

On the other hand, if you just want to use Windows 2012 dedupe and forward incremental, it sounds like it's already working reasonably well (it's saved 4.29TB). Do you have all 5 full backups yet? That's where most of the dedupe will come from. Dedupe-friendly compression will reduce your dedupe ratios by quite a bit, and normally I would suggest using no compression at all if you are interested in getting the absolute best dedupe ratios.
baatch
Enthusiast
Posts: 30
Liked: 4 times
Joined: May 16, 2013 12:52 am
Full Name: Alexander

Re: Best Practice for MS Server 2012 DeDup Repo

Post by baatch »

So changing to reverse incremental, turning on stronger Veeam compression and turning off Server 2012 dedupe would yield better storage savings for me?

I read that reverse incremental requires roughly 3x the IOPS of forward incremental (each changed block means reading the old block from the VBK, writing it to the rollback file, then writing the new block into the VBK). I'm also worried about the backup window. Right now a full takes around 10-11 hours. How long would a reverse incremental take? Also, if I turn off Server 2012 dedupe, then my other files on the volume (backups of physical servers made with Backup Exec) would not be deduplicated.
baatch
Enthusiast
Posts: 30
Liked: 4 times
Joined: May 16, 2013 12:52 am
Full Name: Alexander
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by baatch »

On the compression level, is there any reason not to use Optimal or Extreme instead of dedupe-friendly with Server 2012 dedup?
foggy
Veeam Software
Posts: 21138
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy »

baatch wrote:On the compression level, is there any reason not to use Optimal or Extreme instead of dedupe-friendly with Server 2012 dedup?
The only reason, I guess, is the worse dedupe ratio you will get with these levels compared to dedupe-friendly.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Best Practice for MS Server 2012 DeDup Repo

Post by tsightler »

You will get very little dedupe if you use a full compression level. "Dedupe friendly" compression reduces the data set size using a very simple algorithm, so the result can still benefit at least partially from dedupe. Advanced compression algorithms, however, typically reduce dedupe down to near nothing, unless you are storing the same compressed pattern multiple times.
norelco
Influencer
Posts: 14
Liked: never
Joined: Feb 08, 2013 9:42 pm

Re: Best Practice for MS Server 2012 DeDup Repo

Post by norelco »

So... if we're best off using forward incremental, then we don't have to do the disk IO associated with reverse incremental backups. It's recommended to use RAID10 arrays for the backup repository (presumably so the disk IO is there to do reverse incrementals). If we're not going to use reverse incrementals, would a RAID 5 array be considered acceptable?

n
foggy
Veeam Software
Posts: 21138
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy »

norelco wrote:If we're not going to use reverse incrementals, would a RAID 5 array be considered acceptable?
A couple of topics FYI regarding RAID recommendations, might be helpful:
RAID6 or RAID10 for backup storage on local disk array?
NAS RAID level
haribos
Novice
Posts: 3
Liked: never
Joined: Jul 31, 2013 8:15 am

Re: Best Practice for MS Server 2012 DeDup Repo

Post by haribos »

I'm testing with Server 2012 dedup; the only 'issue' I experience is that synthetic fulls take an extremely long time, because the required blocks have to be read back out of the deduplicated data, which is slower. What is the best way to go: staying on synthetic full, or active full backup?
Vitaliy S.
VP, Product Management
Posts: 27371
Liked: 2799 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Vitaliy S. »

If you can run your production VMs on a snapshot for an extended period of time, then I would suggest active full; in all other cases synthetic full should be the preferable choice, because it has no impact on production storage/VMs.
depps
Enthusiast
Posts: 25
Liked: never
Joined: Jan 24, 2011 10:16 pm
Full Name: Daniel Epps

Re: Best Practice for MS Server 2012 DeDup Repo

Post by depps »

Has anyone used Server 2012 de-dupe attached to a QNAP?

I'm thinking of getting a 64TB QNAP, attaching it via iSCSI or NFS to a physical or virtual machine, and using this to store 5 years of monthlies and bi-weekly SQL backups.
depps
Enthusiast
Posts: 25
Liked: never
Joined: Jan 24, 2011 10:16 pm
Full Name: Daniel Epps

Re: Best Practice for MS Server 2012 DeDup Repo

Post by depps »

Possibly looking at a Dell NX3200 or HP 1630 running Storage Server 2012 as well.
luckyinfil
Enthusiast
Posts: 91
Liked: 10 times
Joined: Aug 30, 2013 8:25 pm

[MERGED] Windows 2012 dedupe design best practices

Post by luckyinfil »

I've been using Windows 2012 dedupe along with Veeam to back up our VMs, but what I'm noticing is that the dedupe fails to keep up with the data creation. For example, whenever my synthetics are created, there can be up to 5 TB of data created across a couple of volumes that day. Doing some research, it has been mentioned that dedupe only works at 100GB/hour MAX speed. Is there any way to increase that rate, provided we have more than sufficient resources?

What are the best practices for designing for large synthetic backups? (considering that the best practice for local repositories is a maximum of 8TB per file)
veremin
Product Manager
Posts: 20400
Liked: 2298 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin »

Doing some research, it has been mentioned that dedupe only works at 100GB/hour MAX speed. Is there any way to increase that rate, provided we have more than sufficient resources?
Hi, John.

As far as I know, this limitation exists at the volume level. Windows deduplicates a single volume with a single job, and can process roughly 100 GB of data per hour, or about 2 terabytes (TB) per day, per volume. Even though a Windows server with additional CPU and memory resources is able to deduplicate multiple volumes at the same time, the speed limit for a single deduplicated volume cannot currently be bypassed.

Thanks.
luckyinfil
Enthusiast
Posts: 91
Liked: 10 times
Joined: Aug 30, 2013 8:25 pm

Re: Best Practice for MS Server 2012 DeDup Repo

Post by luckyinfil »

Do you have a source for that? Based on my observations, I see that there are multiple threads deduping on different volumes (I see multiple fsdmhost.exe processes working on different files on 2 different volumes in Resource Monitor).
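
An easy way to double-check which jobs are running on which volume, without digging through Resource Monitor:

Get-DedupJob | fl Volume, Type, State, Progress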
veremin
Product Manager
Posts: 20400
Liked: 2298 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin »

Yep, sure:
Data can be optimized at 20-35 MB/sec within a single job, which comes out to about 100GB/hour for a single 2TB volume using a single CPU core and 1GB of free RAM. Multiple volumes can be processed in parallel if additional CPU, memory and disk resources are available.
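
As a sanity check on those numbers: 28 MB/sec x 3,600 sec is roughly 100 GB/hour, i.e. about 2.4 TB/day per volume; if a volume takes in new data faster than that, optimization can never catch up.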
lxzndr
Novice
Posts: 9
Liked: 2 times
Joined: Jun 24, 2011 3:26 pm

Re: Best Practice for MS Server 2012 DeDup Repo

Post by lxzndr » 2 people like this post

I've been using 2012 dedup for about 6 months now (also using Windows Storage Spaces).
A relatively small shop: 24 VMs being backed up in Veeam, 14 daily, 10 once a week (Veeam 6.5, VMware 5.1).
While for many of you this is a tiny shop, for some (even large shops) it may provide some insight, particularly when trying to both keep a lot of restore points and deal with the 2012 ingest rate issue. Do your backups to a non-dedup volume, and then copy them to the dedup volume for archival. You can continue your forward/reverse incrementals at normal speed, and get your longer-term storage at the same time. You do lose immediate access to the archived jobs from the Veeam GUI, but they are really there for infrequent use anyway; think of them as your old tape copies. With this process you could even do a GFS rotation in dedup.

Here is my process:

All Veeam backups go to a non-dedup-enabled volume on the server. I basically have one job per VM, except for the weekend, when the VMs with minimally changed data share one job.
I do a full once per week, and which ones get a full rotates: VMs 1+2 on Monday, 3+4 on Tuesday, etc. That spreads the dedup load across the week.
Each day the jobs are finished by midnight. At 1am I kick off a script that does a full robocopy of the non-dedup volume to an external 4TB NAS device. When that finishes, it does a robocopy of the VBK files to the dedup volume; when that finishes, it fires off a manual dedup job (dedup file age = 0 for immediate deduplication, with the built-in dedup schedule disabled).
I get an e-mail after each robocopy, and I've configured a triggered task to e-mail the dedup status after dedup jobs complete.

I have also created a script that will remove the archived jobs once they exceed a specified retention period. (I haven't removed any yet.)
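
The copy/dedup part of that script boils down to something like this (the paths, volume letter and retry options are placeholders, not my exact script):

# 1. Mirror the live (non-dedup) backup volume to the external NAS
robocopy D:\VeeamBackups \\nas01\veeam /MIR /R:2 /W:30 /NP /LOG:C:\Logs\nas-copy.log

# 2. Copy the full backup files to the dedup volume for archival
robocopy D:\VeeamBackups E:\Archive *.vbk /R:2 /W:30 /NP /LOG:C:\Logs\archive-copy.log

# 3. Kick off deduplication immediately (file age is set to 0 on this volume)
Start-DedupJob -Volume E: -Type Optimization -Wait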

I end up with 1 to 2 weeks (depending on the VM) of live backup files: full + forward incrementals (duplicated to the external NAS), with VBK sizes from 3GB up to 280GB (~1.9TB of VBK files).
Then I have (currently) 6 months of VBK files on the dedup volume. I have expanded the capacity about 3 times in the 6 months; it started as a 3TB volume. Free space fluctuates a bit. I only need about 400GB free, due to staggering the full backups through the week.

Volume : E:
VolumeId : \\?\Volume
Capacity : 6.5 TB
FreeSpace : 607.24 GB
UsedSpace : 5.91 TB
UnoptimizedSize : 53.49 TB
SavedSpace : 47.58 TB
SavingsRate : 88 %
OptimizedFilesCount : 757
OptimizedFilesSize : 53.42 TB
OptimizedFilesSavingsRate : 89 %
InPolicyFilesCount : 757
InPolicyFilesSize : 53.42 TB
veremin
Product Manager
Posts: 20400
Liked: 2298 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin »

@lxzndr

Have you considered an update to the latest product version (7.0), which has a new type of job called "backup copy job"?

I'm asking since your deployment seems like a perfect use case for the backup copy job. Not only will it handle the process of copying backup data to the deduplicated volume for you, it will also deal with the retention period and a GFS rotation scheme.

Thanks.
VladV
Expert
Posts: 224
Liked: 25 times
Joined: Apr 30, 2013 7:38 am
Full Name: Vlad Valeriu Velciu

Re: Best Practice for MS Server 2012 DeDup Repo

Post by VladV »

We have a similar setup to lxzndr's and were hoping that backup copy would help us discard the scripting-and-robocopy solution.

Because of how backup copy is designed, you will always have the transformation processes that modify the VBK, due to the minimum of 2 retention points and the weekly archives. Compared to having only weekly VBKs on the dedupe volume, this gives lower dedupe performance. If backup copy had permitted weekly backups (or other GFS periods) without a permanent VBK/VIB combo, it would have been ideal.
