Comprehensive data protection for all workloads
dualdj1
Enthusiast
Posts: 47
Liked: 4 times
Joined: Feb 05, 2013 6:56 pm
Full Name: Jason K. Brandt
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by dualdj1 »

VladV wrote:We have a similar setup to lxzndr's and were hoping that Backup Copy would help us discard the scripting and robocopy solution.

Because of how Backup Copy is designed, you will always have the transformation processes that modify the VBK, due to the minimum of 2 retention points and the weekly archives. Compared to having only weekly VBKs on the dedupe volumes, this results in lower dedupe performance. If Backup Copy allowed weekly backups (or other GFS periods) without a permanent VBK/VIB combo, it would be ideal.
I, too, would like to see the daily VIBs be optional. I am using the same physical SAN for my dailies and archive, so if I experience a failure, having them archived won't help (I back the dailies up with a tape job).
luckyinfil
Enthusiast
Posts: 91
Liked: 10 times
Joined: Aug 30, 2013 8:25 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by luckyinfil »

Has anyone successfully configured dedupe for backup VBK files that are fairly large, say 3-4 TB each? What I'm noticing is that the dedupe engine doesn't seem to work on these large files.

Veeam recommends that you can have up to 8 TB of VMs backed up per job as long as the repository is local; that would translate to 3-4 TB VBK files.

Are there specific settings to configure for these large files?
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Vitaliy S. »

I don't believe there is a magic trick for that, since the dedupe rate is limited by the engine itself. Can you please clarify what you mean by "doesn't seem to work"?
luckyinfil
Enthusiast
Posts: 91
Liked: 10 times
Joined: Aug 30, 2013 8:25 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by luckyinfil »

As in, any time a volume has a large file (3-4 TB in my experience), running the dedupe jobs does nothing. Even manually running them at high priority does nothing.
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Vitaliy S. »

OK, thanks for the clarification. Let's see what our community members can share on this.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by tsightler »

The only way to scale Windows 2012 dedupe is to use multiple volumes. I'd suggest splitting your backups across three or four smaller repositories and keeping the full backup files under about 1 TB each.
luckyinfil
Enthusiast
Posts: 91
Liked: 10 times
Joined: Aug 30, 2013 8:25 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by luckyinfil »

That's what it seems like you'd have to do. 2012 dedupe really seems to have issues with files larger than 2 TB. Perhaps I'll try setting the Veeam backup job to WAN target storage optimization and hope for smaller VBK files; if not, I'll have to create smaller jobs. The problem with that is that you don't know how big the VBK files will be until you run a full backup, which makes it a trial-and-error process.

Is there official word on how large your backup jobs should be in terms of:
a) number of VMs
b) size of backup job
VladV
Expert
Posts: 224
Liked: 25 times
Joined: Apr 30, 2013 7:38 am
Full Name: Vlad Valeriu Velciu
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by VladV »

From what I've read regarding dedupe best practices, files larger than 1 TB aren't recommended.

I have a 1.8 TB VM which, after a full backup, occupies about 1.4 TB in a VBK, and I managed to squeeze 8 such VBKs onto a 3.7 TB dedupe volume. Processing each VBK takes about 2 days. I copy one VBK per week and run a throughput optimization dedupe job at high priority with 70% memory on a server with 20 GB of RAM. From what I see, the job never goes over 8 GB of RAM, which is under 50% of memory and makes me believe their recommendation to keep files under 1 TB stands. Unfortunately, the job performance doesn't scale any further.

Don't forget to run garbage collection jobs, especially when you interrupt an ongoing optimization job; otherwise you won't see the savings.
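
For reference, a manual run of that kind of job with the in-box PowerShell cmdlets looks roughly like this (a sketch only; the E: drive letter is an example, and the priority/memory values simply mirror the ones mentioned above):

# Queue an optimization pass at high priority with ~70% of RAM
Import-Module Deduplication
Start-DedupJob -Volume E: -Type Optimization -Priority High -Memory 70
# If an optimization run was interrupted, reclaim the space afterwards
Start-DedupJob -Volume E: -Type GarbageCollection -Priority High
# Watch progress
Get-DedupJob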
mgd5
Enthusiast
Posts: 28
Liked: 2 times
Joined: Oct 05, 2012 6:36 am
Full Name: Christer Sundqvist
Contact:

[MERGED] : Veeam & Win2012 dedup

Post by mgd5 »

Hi,

We have been using Veeam and Win2012 with dedupe for a couple of weeks. The dedupe ratio isn't as good as I thought it would be; one reason could be that we pushed in too much data and Windows has been busy ever since.

I have a question regarding backup and dedupe. Right now we are running a few jobs that are big in size, around 1-5 TB per job. Is it a good idea to split those into smaller jobs for dedupe savings?

Big files coming into dedupe, or lots of smaller ones, meaning more backup jobs?

Regards

Christer
veremin
Product Manager
Posts: 20415
Liked: 2302 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin »

Hi, Christer,

For general considerations regarding Veeam and Windows 2012, please take a look at the answers provided above. Also, please bear in mind that Windows 2012 can deduplicate a single volume at a speed of about 100 GB per hour, or 2 TB per day.

Thanks.
luckyinfil
Enthusiast
Posts: 91
Liked: 10 times
Joined: Aug 30, 2013 8:25 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by luckyinfil »

I believe that 100GB/hour dedupe figure is per volume, to a maximum of 2 volumes. Having more volumes will create more parallel dedupe jobs, but I've only noticed 2 jobs actually running concurrently.
luckyinfil
Enthusiast
Posts: 91
Liked: 10 times
Joined: Aug 30, 2013 8:25 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by luckyinfil »

luckyinfil wrote:I believe that 100GB/hour dedupe figure is per volume, to a maximum of 2 volumes. Having more volumes will create more parallel dedupe jobs, but I've only noticed 2 jobs actually running concurrently.
Just wanted to correct my initial statement: Windows Server 2012 dedupe is actually able to dedupe more than 2 volumes concurrently. Not sure why my third volume was not deduping originally, but it is now.
veremin
Product Manager
Posts: 20415
Liked: 2302 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin »

Yep, you're correct here. As mentioned previously, you can concurrently deduplicate as many volumes as you want, as long as Windows Server 2012 isn't deprived of CPU and memory. However, it's not possible to bypass the per-volume speed limit (2 TB/day).

Thanks.
lando_uk
Veteran
Posts: 385
Liked: 39 times
Joined: Oct 17, 2013 10:02 am
Full Name: Mark
Location: UK
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by lando_uk »

FYI

I've just finished a test dedupe of a copy of one of our jobs: 15 TB (approx. 140 files, 1 TB VBKs with 150 GB VIBs). It took about 8 days and the dedupe ratio was 56%, saving a total of 9 TB. This wasn't a dedupe-friendly copy, as it was using extreme compression, so I'm hoping for better than 56% on the real setup.

I used %systemroot%\system32\ddpcli.exe enqueue /opt /scheduled /vol * /priority high /memory 75
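
For anyone who prefers the DedupJob cmdlets over ddpcli, the rough equivalent should be something like this (a sketch; it assumes you want to queue every dedup-enabled volume, same as the * above):

# Queue a high-priority optimization job on each dedup-enabled volume with 75% memory
Get-DedupVolume | ForEach-Object {
    Start-DedupJob -Volume $_.Volume -Type Optimization -Priority High -Memory 75
}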

I'm wondering if Server 2012 R2 has improved dedupe performance. I've tested it on another box, but not with the same 15 TB copy (no time), and it still only used 1 CPU per volume, so I'm not holding out much hope that it will process much faster.
yvang
Lurker
Posts: 2
Liked: 1 time
Joined: Oct 26, 2012 4:38 pm
Full Name: Ying Vang
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by yvang » 1 person likes this post

I've opted to use PowerShell to create and maintain my jobs so that the VBK files stay around 400 GB and the VIB files around 50 GB. I also make sure that the total size of all the full backups, added together per LUN, is around 4 TB. This allows enough time to complete the dedupe process for all the full backups and still catch up on deduping the incremental backups before the next set of fulls hits the LUN. Remember to account for a landing zone at least the same size as your total full backups, and leave at least 1 TB free for the dedupe process.

Dedup configuration on Windows 2012
- Zero Day dedup
- individual LUN size 12TB
- 24 GB of memory on Server
- 16 cores
- Direct SAN connectivity to ESXi LUNs (around 60 LUNs, set to read-only)
- Direct SAN connectivity to Backup LUNs

Veeam Configuration for each job:
- Compression off
- Veeam Dedup on
- Incremental backup with a Full on Friday
- Run up to 14 backup jobs simultaneously on the Veeam proxy/repository server

LUN 1 stats:
Capacity : 13194003210240
FreeSpace : 3373252694016
InPolicyFilesCount : 320
InPolicyFilesSize : 28476361604658
OptimizedFilesCount : 251
OptimizedFilesSavingsRate : 87
OptimizedFilesSize : 20377713161122
SavedSpace : 17871591794297
SavingsRate : 64
UnoptimizedSize : 27692342310521
UsedSpace : 9820750516224


LUN 2 Stats:
Capacity : 13194003210240
FreeSpace : 5953303379968
InPolicyFilesCount : 270
InPolicyFilesSize : 25080825008499
OptimizedFilesCount : 247
OptimizedFilesSavingsRate : 80
OptimizedFilesSize : 20344239894899
SavedSpace : 16415830518141
SavingsRate : 69
UnoptimizedSize : 23656530348413
UsedSpace : 7240699830272

LUN 3 Stats:
Capacity : 13194003210240
FreeSpace : 11713857712128
InPolicyFilesCount : 123
InPolicyFilesSize : 12098731691186
OptimizedFilesCount : 123
OptimizedFilesSavingsRate : 88
OptimizedFilesSize : 12098731691186
SavedSpace : 10665404968117
SavingsRate : 87
UnoptimizedSize : 12145550466229
UsedSpace : 1480145498112

LUN 4 Stats:
Capacity : 13194004246528
FreeSpace : 6616882970624
InPolicyFilesCount : 253
InPolicyFilesSize : 25994694624285
OptimizedFilesCount : 247
OptimizedFilesSavingsRate : 83
OptimizedFilesSize : 23276211524637
SavedSpace : 19541507141612
SavingsRate : 74
UnoptimizedSize : 26118628417516
UsedSpace : 6577121275904
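
Per-LUN figures like these can presumably be pulled with Get-DedupStatus; a sketch, with the property list trimmed to the fields shown above:

Get-DedupStatus | Format-List Volume, Capacity, FreeSpace, UsedSpace, UnoptimizedSize,
    SavedSpace, SavingsRate, OptimizedFilesCount, OptimizedFilesSize,
    OptimizedFilesSavingsRate, InPolicyFilesCount, InPolicyFilesSize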
luckyinfil
Enthusiast
Posts: 91
Liked: 10 times
Joined: Aug 30, 2013 8:25 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by luckyinfil »

How can you guarantee that your VBKs stay under a certain size when creating Veeam backup jobs? You have no indication of how well the files will dedupe.
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Gostev »

To be on the safe side when estimating, I always use 50% as a rule of thumb for data reduction ratio. It can be much more depending on the workload, but you will rarely see data reduction ratios less than that.
dledoux
Lurker
Posts: 2
Liked: never
Joined: Mar 08, 2013 7:20 pm
Full Name: Hvguy
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by dledoux »

Anyone know if Server 2012 R2 still dedupes at one thread/core? I've considered using StarWind's free iSCSI SAN for Hyper-V rather than the native dedupe in Server 2012+, because it has dedupe as well, but I haven't tried it yet. I'm sure there is a performance hit, as StarWind, I think, does "in-line" dedupe rather than a dedupe "crawl" like 2012. Anyone else tried that yet?
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Gostev »

If you are looking at software-based deduplicating storage alternatives, you may also want to evaluate HP StoreOnce VSA. We had quite a good experience with its hardware version.
VladV
Expert
Posts: 224
Liked: 25 times
Joined: Apr 30, 2013 7:38 am
Full Name: Vlad Valeriu Velciu
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by VladV » 1 person likes this post

dledoux wrote:Anyone know if Server 2012 R2 still dedupes at one thread/core? I've considered using StarWind's free iSCSI SAN for Hyper-V rather than the native dedupe in Server 2012+, because it has dedupe as well, but I haven't tried it yet. I'm sure there is a performance hit, as StarWind, I think, does "in-line" dedupe rather than a dedupe "crawl" like 2012. Anyone else tried that yet?
From what I can tell, in my case, compared to 2012 the dedupe processing rate increased by 3-4 times, reaching 6.5 TB/day. I may be limited by storage, but it's still quite an increase in performance, and I am now able to dedupe all the full backups from one week in 7 hours, compared to 2 days with 2012.
5150cd
Influencer
Posts: 11
Liked: never
Joined: Oct 19, 2011 4:44 pm
Contact:

[MERGED] A few questions on Server 2012 Dedupe

Post by 5150cd »

I've been playing around with Veeam v.7 and Server 2012 deduplication. A few questions have popped up that I haven't been able to find answers for:

Is there any benefit to having one VM per job vs. multiple VMs per job? I know that I need to use dedupe-friendly compression on my jobs.

Should I disable "inline data deduplication" on my backup jobs that Server 2012 will dedupe?

I know that I need to use incremental backups, but I'm curious how people are handling full backups. Are you using synthetic fulls? If so, are you having Server 2012 only dedupe files older than 7 days or do you just do 0 days?

Thanks for any information!
5150cd
Influencer
Posts: 11
Liked: never
Joined: Oct 19, 2011 4:44 pm
Contact:

Re: A few questions on Server 2012 Dedupe

Post by 5150cd »

Also, is it worth enabling "Align backup file data blocks" and/or "Decompress backup data blocks before storing" on the Server 2012 repository?
yizhar
Service Provider
Posts: 182
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: A few questions on Server 2012 Dedupe

Post by yizhar » 1 person likes this post

Hi.

First - I have no experience with 2012 dedup yet.
However I can respond to some of the questions:

> Is there any benefit to having one VM per job vs multiple VM's per job?
Yes, there are some benefits to several jobs (hence several VBK files) vs. one single job.
This doesn't mean that you should have one VM per job; it depends on several factors.
I recommend using 1 VM per job for large VMs (file server, mail server, etc.), and several VMs per job grouped by role, tenant, or whatever parameter best fits your needs and preferences.

Regarding dedupe - Windows 2012 dedupe scans and deduplicates file by file, and then by block. Having several smaller files rather than one single VBK seems like the better approach to me.

There are other advantages to several VBK files and jobs vs. one single large one: you can easily launch a specific job (for example, just before installing a service pack on the large mail server) without waiting for other VMs.
VBK files are also easier to move around.

> Should I disable "inline data deduplication" on my backup jobs that Server 2012 will dedupe?
No, as this will create larger VBK files. But you can use "LAN Target" with relatively large blocks for Veeam dedupe, and Win 2012 will further dedupe the files using variable blocks and global deduplication across VBK files.

One more tip:
If applicable, you can have 2 volumes (repositories).
Primary fast repository without dedupe - store current backups (the last week or two's worth) there.
Secondary repository with dedupe for archives - use a "backup copy job" or another method to store weekly/monthly backup copies for long-term retention such as GFS.
If possible - use different physical disks for each repository.

Yizhar
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Vitaliy S. »

Hello,

1. Having one VM per job is not really the best practice. We recommend adding multiple VMs to the job, unless there is really a need for a single-VM-per-job configuration.
2. I wouldn't recommend disabling Veeam deduplication, as the backup files will take more space on the repository. Windows Server is still capable of deduplicating the files created by your backup jobs.
3. Please review this topic for details on full backup handling.
4. Decompressing backup files before writing them to the target storage will give you a better dedupe ratio, but even with compression enabled, Windows Server 2012 will dedupe Veeam backup files.

Thanks!
yvang
Lurker
Posts: 2
Liked: 1 time
Joined: Oct 26, 2012 4:38 pm
Full Name: Ying Vang
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by yvang »

Has anyone been able to get more than 4 concurrent dedupe optimization jobs running? I have not, but based on Microsoft's document http://technet.microsoft.com/en-us/libr ... 31700.aspx I should be able to with my current hardware setup:

Dell PowerEdge R720
16 Cores (excluded hyperthreading already)
24 GB Mem
5 SAN connection volumes

Things I've tried that didn't help:

Optimizing scheduled dedup jobs - set 1:
set-dedupschedule -Name "BackgroundOptimization" -Type Optimization -Memory 10
set-dedupschedule -Name "ThroughputOptimization" -Type Optimization -Memory 10

Optimizing scheduled dedup jobs - set 2:
set-dedupschedule -Name "BackgroundOptimization" -Type Optimization -Memory 75
set-dedupschedule -Name "ThroughputOptimization" -Type Optimization -Memory 75

Optimizing scheduled dedup jobs - set 3:
set-dedupschedule -Name "BackgroundOptimization" -Type Optimization -Memory 25
set-dedupschedule -Name "ThroughputOptimization" -Type Optimization -Memory 25


I also opened a low-level ticket with Microsoft, but due to the lack of knowledge of the dedupe process on the part of the tech I was working with, I closed the case. (The Microsoft tech was only able to run two concurrent dedupe optimization jobs in her lab, but wanted me to do more testing on my side.)

If you are able to get more than 4 concurrent jobs, please let me know how you are doing it. I'd like to stick with one or two physical Veeam repository/proxy servers per datacenter rather than going to one virtual Veeam repository/proxy server per ESX cluster.

Thanks
kte
Expert
Posts: 179
Liked: 8 times
Joined: Jul 02, 2013 7:48 pm
Full Name: Koen Teugels
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by kte »

I have a 64 TB LUN with 35 TB used, and I would like to dedupe the GFS VBK files created by my copy jobs on Windows 2012 R2.
But after activating dedupe on that volume I get a VSS "incorrect parameter" issue. I have VBKs of 2.3 TB and a 6 TB VBK.
Any ideas?

k
paps40
Enthusiast
Posts: 31
Liked: 13 times
Joined: Dec 12, 2011 4:10 pm
Full Name: Peter Pappas
Contact:

[MERGED] Veeam 7 R2 & Server 2012 Deduplication

Post by paps40 »

I had a question on Veeam 7 R2 & Server 2012 Deduplication. I have been running Veeam 7 R2 on a new physical Dell R720xd for 2 months now and am at the point where I need to enable deduplication for backup files older than 7 days.

My Veeam jobs were set up according to best practices from info I read on the forums (incrementals Mon to Fri with active fulls on weekends, Enable inline data deduplication checked, Compression Level = None, Storage Optimization = LAN Target).
4 Veeam jobs were set up, split by server OS.
The server has 36 TB of local NL-SAS storage.
4 Veeam repositories have been set up, one per job, with 8 TB of disk each, so 32 TB total is allocated to Veeam. The repositories were laid out this way to run Server 2012 dedupe jobs in parallel.
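
For reference, enabling dedupe on a repository volume with the 7-day minimum file age mentioned above looks roughly like this (a sketch; E: stands in for one of the four repository volumes):

# Enable Data Deduplication on the repository volume and only process files older than 7 days
Import-Module Deduplication
Enable-DedupVolume -Volume E:
Set-DedupVolume -Volume E: -MinimumFileAgeDays 7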

Questions
1. What will happen if I run the dedupe jobs while the Veeam jobs are running? Veeam support (Case # 500255) said it could cause corruption when I go to restore data, but they were going to clarify with level 2 and get back to me. I was planning on letting the dedupe jobs run continuously by using the default Enable background optimization setting when enabling dedupe. With 8 TB in each repository and Server 2012 only processing 100 GB per hour on average, it might take 3.5 days to dedupe the entire volume. Obviously we can't stop backups for 3 days to let the dedupe finish. Does anyone have any thoughts or experience with this?


2. Does anyone know whether, if you schedule dedupe with Enable throughput optimization over the weekend (Sat/Sun, 24 hours each day) and the job doesn't finish, it will pick up where it left off? I couldn't find an exact answer on TechNet, but I'm assuming that it will start over the next weekend and potentially never finish, as it doesn't have enough time. Does anyone know exactly how this works?


3. What happens when I go to restore a backup that Server 2012 has deduplicated? Is the restore seamless? On a modern server, how much longer would the restore take while Veeam waits for the data to be rehydrated?

Thanks for your help.
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Vitaliy S. »

1. According to the recommendations in this topic, it's better to have different schedules for the backup and dedupe jobs.
2. I couldn't find the answer to this question either, but to dedupe the entire volume I assume it's better to use the BackgroundOptimization option, which can be paused when the system gets busy and resumed when resources become available.
3. Yes, it is. To run a restore from a deduped volume, you should enable this feature on the backup server.
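
The feature referred to in point 3 is presumably the Data Deduplication role service itself; installing it on the server that mounts the deduped volume is a one-liner (a sketch, run from an elevated PowerShell prompt):

# Without this role service, a Windows server cannot read optimized (deduplicated) files on the volume
Install-WindowsFeature -Name FS-Data-Deduplication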
paps40
Enthusiast
Posts: 31
Liked: 13 times
Joined: Dec 12, 2011 4:10 pm
Full Name: Peter Pappas
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by paps40 »

I have a few more deduplication questions for the forum.

1. Does anyone know if a Veeam full backup file over 1 TB will ever be deduped? For example, I have a weekly full job with 20 VMs that is 2.2 TB. Dedupe ran for 2 days straight (48 hours) and it only processed 1 incremental file, which it deduped from 60 GB to 4 KB. Only 1 file has had its attributes changed to the dedupe attributes APL. The dedupe job still shows 0%. I know MS recommends nothing over 1 TB, but I still thought it would work. Has anyone seen this work? I might have to break the job into 2 smaller jobs if it doesn't.

2. Has anyone noticed that dedupe doesn't run well on Server 2012 backups? For example, my Server 2008 backup repository dedupe rate was 60%, but my Server 2012 backup repository dedupe rate was only 12%. What has changed to cause this? I assumed that 2012 Veeam backups would get better dedupe than 2008 Veeam backups.
paps40
Enthusiast
Posts: 31
Liked: 13 times
Joined: Dec 12, 2011 4:10 pm
Full Name: Peter Pappas
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by paps40 » 6 people like this post

I wanted to answer my own questions for those who might be having the same issues, and post some of my findings with Server 2012 dedupe.

1. Server 2012 dedupe will eventually dedupe a 2.2 TB .vbk file. In my case it took almost 3 days, even with 75% RAM allocated to the job, and it deduped the 2.2 TB file to 804 GB. Dedupe speed was approximately 40 GB per hour; it is much faster on smaller .vbk files under 1 TB, at approximately 100 GB per hour. It deduped a Server 2008 active full backup from 573 GB to 131 GB. After 2 more active fulls, it was able to dedupe new active fulls from 573 GB to 4 KB.

It makes sense why Microsoft mentions that "Files approaching or larger than 1 TB in size" are not good candidates for deduplication.
http://technet.microsoft.com/en-us/libr ... BKMK_Step1

2. Server 2012 dedupe starts at the oldest file and works its way up. If you add the Attributes column in Explorer, you can see it changes a deduped Veeam backup file's attributes from A to APL (a quick PowerShell check is sketched below).
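
A quick way to spot which files have already been optimized, since deduplicated files become reparse points (a sketch; the E:\Backups path is just an example):

# List backup files and their attributes; optimized files carry ReparsePoint (the L in APL)
Get-ChildItem E:\Backups -Filter *.vbk -Recurse |
    Select-Object Name, Length, LastWriteTime, Attributes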

3. Server 2012 dedupe will pick up where it left off. I have started and stopped jobs, only to notice that they do not start over from the beginning; they pick up where they left off, which is a good thing on large volumes where you need dedupe to stop while the Veeam backup jobs are running.

4. The best way to schedule dedupe is to export the current default jobs from Task Scheduler (Task Scheduler Library > Microsoft > Windows > Deduplication), rename them, import them, tweak them, and then disable the default jobs. This gives you much more control over scheduling certain volumes and allocating more RAM to the job. You can specify the volume by deleting the * and replacing it with E:, F:, etc. (a cmdlet-based alternative is sketched after the link below).
Add Arguments = enqueue /opt /scheduled /vol * /priority high /memory 75

Forum posting that talks about this
http://forums.veeam.com/posting.php?mod ... =2&p=79089
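
An alternative to editing Task Scheduler directly is the DedupSchedule cmdlets; a rough sketch (the name, days, window and memory value are illustrative, and as far as I know cmdlet-created schedules apply to all dedup-enabled volumes, which is why the ddpcli /vol trick above is still handy for targeting a single volume):

# Weekend optimization window, 24 hours each day, high priority, 75% memory
New-DedupSchedule -Name "WeekendOptimize" -Type Optimization `
    -Days Saturday, Sunday -Start 08:00 -DurationHours 24 -Priority High -Memory 75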

5. Based on the post below from Yizhar, I found the best way to handle dedupe was to carve out 2 big volumes: volume 1 as a hot landing zone for recent backups (7 days or so) and volume 2 for long-term archiving. The long-term archive volume can be scheduled to dedupe 24/7 using 50% of server RAM; at a rate of 50 GB per hour it should be able to dedupe about 8.4 TB per week. Dedupe is much slower on VBKs over 1 TB. By using this method, you don't have to break your backup jobs into smaller jobs, and you still get global deduplication across VBK files. We have our backup jobs set up by server OS.

Forum posting that talks about this
http://forums.veeam.com/posting.php?mod ... =2&p=93524

6. Based on forum research/suggestions and conversations with Veeam tech support (Case # 440680), the most optimal Veeam backup job settings I found when using Server 2012 dedupe were the following:
6a. Check - enable inline data dedupe
6b. Compression Level = None
6c. Storage Optimization = LAN Target (uses 512 KB blocks; smaller blocks = better dedupe) - we are backing up to a physical Veeam server with local storage in my situation.
6d. Backup Mode = Incremental Mon - Thurs with Active Fulls on Fri. (Safest way in my opinion in case of synthetic full corruption)

Forum posting that talks about this
http://forums.veeam.com/posting.php?mod ... =2&p=73493