push3r
Enthusiast
Posts: 36
Liked: 6 times
Joined: May 17, 2013 11:54 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by push3r » Apr 14, 2015 4:27 pm

lightsout wrote:I turned off compression for my backup copy jobs too, as that goes to a HW dedup device.
But I am sending my Backup Copy Jobs across the WAN. Any suggestions?

lightsout
Expert
Posts: 218
Liked: 58 times
Joined: Apr 10, 2014 4:13 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by lightsout » Apr 14, 2015 4:30 pm

That is the downside: it does create more LAN traffic. I do get higher dedup ratios on my storage without it, though.

Given you've got WAN acceleration, though, I would suspect leaving it off is OK.

veremin
Product Manager
Posts: 17047
Liked: 1470 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin » Apr 14, 2015 5:21 pm

Should I use "None" for Backup Copy Job too?
Why not leave compression as is, but enable the "Decompress backup data blocks before storing" option in the settings of the given repository? This way the data will cross the wire in a compressed state and will be decompressed at the repository side.
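A toy sketch of why that combination works (plain Python, purely illustrative; the zlib calls stand in for Veeam's job compression, not for anything Veeam actually runs):

```python
import zlib

block = b"OS and application binaries shared by many VMs " * 100

# Job compression stays on, so the block crosses the WAN compressed...
wire = zlib.compress(block)

# ...and "Decompress backup data blocks before storing" restores the
# dedupe-friendly plaintext before it lands on the repository volume.
stored = zlib.decompress(wire)

assert stored == block            # identical blocks on disk -> good dedupe
assert len(wire) < len(block)     # while the WAN transfer stayed small
print(len(block), len(wire))
```

You get the best of both: small transfers in flight, dedupe-friendly data at rest.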

push3r
Enthusiast
Posts: 36
Liked: 6 times
Joined: May 17, 2013 11:54 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by push3r » Apr 14, 2015 7:58 pm

veremin,

Thanks for the tip. I forgot about those settings in the Repository.

Is there some sort of best-practice PDF from Veeam for stuff like this? Useful information is scattered everywhere in the forum.

Shestakov
Veeam Software
Posts: 7039
Liked: 725 times
Joined: May 21, 2014 11:03 am
Full Name: Nikita Shestakov
Location: Prague
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Shestakov » Apr 14, 2015 9:50 pm

push3r wrote:I am wondering, with a 100 Mbps WAN connection, would Direct be better? But the bandwidth saving is not as good as with WAN Accelerators? I don't have any bandwidth quota limitations, so it seems Direct would be better?
If you have the Enterprise Plus edition, I would suggest using WAN Acceleration; it can give up to 50x traffic savings, which is especially useful for 100 Mbps bandwidth and big data transmissions.
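Some back-of-the-envelope arithmetic for the 100 Mbps link discussed here. The 500 GB daily transfer is an invented example, and 50x is the best-case figure quoted above, not a guarantee:

```python
# Transfer time over a 100 Mbps link, with and without the quoted
# best-case 50x reduction from WAN acceleration.
link_mbps = 100
transfer_gb = 500          # assumed example, not from the thread

hours_direct = transfer_gb * 8 * 1000 / link_mbps / 3600
hours_accelerated = hours_direct / 50

print(round(hours_direct, 1), round(hours_accelerated, 2))  # 11.1 0.22
```

Even a much more modest reduction than 50x moves a multi-hour window down to something comfortable on a 100 Mbps pipe.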
Thanks.

push3r
Enthusiast
Posts: 36
Liked: 6 times
Joined: May 17, 2013 11:54 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by push3r » Apr 14, 2015 11:17 pm

Yes, I have an Ent Plus license. I did some more reading and saw that 100 Mbps and below should use WAN Acceleration. Thanks.

conradblack
Enthusiast
Posts: 41
Liked: 2 times
Joined: Mar 19, 2015 1:54 pm
Contact:

[MERGED] Server 2012 dedupe - synthetic full/reverse incremental

Post by conradblack » Jun 22, 2015 4:07 pm

Heya,

I have some inquiries about using Windows Server 2012 deduplication alongside Veeam.

I've read the Veeam documentation, and I am under the impression that it is recommended not to use reverse incremental/synthetic fulls with Windows deduplication. I'm curious about forever forward incremental: when the maximum number of restore points is reached, the oldest increment gets merged into the full VBK. Isn't this the same process as creating a synthetic full? Is this process supported on dedupe volumes?

Also, using virtual full synthetic to tape, would I still be subject to these limitations? I'm assuming as much, since the files being synthesized into a full backup to tape are read from a dedupe volume. I ran a few tests with synthetic fulls from a dedupe volume while having "Transform previous backup chains into rollbacks" selected, and the performance was terrible. I'm guessing the performance will not improve much without transforming the previous backup chain?

What is the best practice with Server 2012 dedupe and Veeam? I'd really like to use synthetic fulls in some capacity with Windows dedupe, whether it be reverse incremental or periodic synthetic fulls with forward incremental.

Thanks :)

foggy
Veeam Software
Posts: 18439
Liked: 1588 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy » Jun 22, 2015 4:44 pm

Hello, I've merged your post into an existing discussion with some hints/recommendations on using Windows deduplication on the backup repository. However, you should definitely expect performance degradation during synthetic activity (think random I/O) on any deduplicating storage.

conradblack
Enthusiast
Posts: 41
Liked: 2 times
Joined: Mar 19, 2015 1:54 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by conradblack » Jun 22, 2015 4:48 pm

foggy wrote:Hello, I've merged your post into an existing discussion with some hints/recommendations on using Windows deduplication on the backup repository. However, you should definitely expect performance degradation during synthetic activity (think random I/O) on any deduplicating storage.
Thanks.

Does this apply as well to forever forward incremental when the oldest increment is merged into the full backup file?

foggy
Veeam Software
Posts: 18439
Liked: 1588 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy » Jun 22, 2015 4:55 pm 1 person likes this post

Yes, though forever forward is less I/O intensive than reverse incremental, it still does synthetic activity during the merge phase.
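A toy model of why the merge counts as synthetic activity (contrived block numbering, just to show the access pattern):

```python
# Forever-forward merge in miniature: the oldest increment's changed
# blocks are written into the full at scattered offsets, which is
# exactly the random-I/O pattern that hurts on a deduplicated volume.
full = {0: "A0", 1: "B0", 2: "C0", 3: "D0"}    # block index -> contents
oldest_increment = {1: "B1", 3: "D1"}           # that day's changed blocks

full.update(oldest_increment)                   # the merge phase
print(full)
```

On real storage those scattered block rewrites land inside a large, already-deduplicated .vbk, so the volume has to rehydrate and re-optimize the touched regions.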

conradblack
Enthusiast
Posts: 41
Liked: 2 times
Joined: Mar 19, 2015 1:54 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by conradblack » Jun 22, 2015 6:06 pm

foggy wrote:Yes, though forever forward is less I/O intensive than reverse incremental, it still does synthetic activity during the merge phase.
Thanks, that clears things up.

I should have asked in my last post, but would virtual full synthetic to tape be similar in I/O performance to regular synthetic fulls?

veremin
Product Manager
Posts: 17047
Liked: 1470 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin » Jun 23, 2015 10:18 am

It should be similar in terms of I/O. Though, I don't think you really have to worry about that, because the VSB is just a tiny file consisting of pointers.

namiko78
Expert
Posts: 117
Liked: 4 times
Joined: Mar 03, 2011 1:49 pm
Full Name: Steven Stirling
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by namiko78 » Dec 07, 2015 1:06 pm

Should I create my RAID with a 64 KB block size as well, the same as the file system? (The default is 256 KB.)

foggy
Veeam Software
Posts: 18439
Liked: 1588 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy » Dec 07, 2015 4:13 pm

Steven, please review this discussion; it should help.

bjdboyer
Enthusiast
Posts: 47
Liked: 2 times
Joined: Nov 16, 2015 5:52 pm
Full Name: Bill Boyer
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by bjdboyer » Jan 20, 2016 8:12 pm

Any updates on lightsout's procedure with v9? Can you still use Windows dedup with scale-out repositories?

Bill

lightsout
Expert
Posts: 218
Liked: 58 times
Joined: Apr 10, 2014 4:13 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by lightsout » Jan 20, 2016 8:34 pm

I'm upgrading too, but at least in concept I would say yes. BitLooker should be turned on, though!

foggy
Veeam Software
Posts: 18439
Liked: 1588 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy » Jan 21, 2016 12:10 pm

Currently there are no contraindications to using Windows deduplication on scale-out repository extents.

lightsout
Expert
Posts: 218
Liked: 58 times
Joined: Apr 10, 2014 4:13 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by lightsout » Jan 21, 2016 2:24 pm

I guess the other interesting one is per-VM VBKs. I don't see this increasing the dedup ratio, but more, smaller VBKs are probably a better situation than fewer very large VBKs, given the processing time for each file.

asris
Novice
Posts: 6
Liked: 2 times
Joined: Jan 13, 2016 8:53 pm
Full Name: ASRIS
Contact:

[MERGED] v9 and Windows Repository with Dedupe enabled

Post by asris » Jan 27, 2016 5:07 pm

We're currently implementing v9. We have a Windows 2012 R2 proxy with a repository on a 2 TB volume that is dedupe-enabled.

When I create a backup job and point it to that repository, can I just use the default settings? Or is there a preferred configuration for that type of repository?

foggy
Veeam Software
Posts: 18439
Liked: 1588 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy » Jan 27, 2016 5:15 pm

You can find settings recommended for Windows dedupe repository in this thread.

asris
Novice
Posts: 6
Liked: 2 times
Joined: Jan 13, 2016 8:53 pm
Full Name: ASRIS
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by asris » Jan 27, 2016 11:02 pm 1 person likes this post

Thanks Foggy. I found KB2023, which recommends formatting the volume with the /L switch and a 64K allocation unit size, similar to what I've seen in this thread. I also don't see any mention of Storage compatibility settings or Backup job settings. For me, I only select "decompress before storing". It would be great for Veeam to optimize the setting choices when adding a Windows dedupe repository.

Anyhow, our setup consists of backup jobs storing to a 2 TB Windows volume with 7-14 restore points. I have a backup copy job that copies those backups to an onsite Data Domain, and another backup copy job that copies them to an offsite Data Domain.

So is it even worth enabling deduplication on the Windows volume? My only concern is its effect on Instant VM Recovery performance.

foggy
Veeam Software
Posts: 18439
Liked: 1588 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy » Jan 28, 2016 12:11 pm

asris wrote:I also don't see any mention of Storage compatibility settings or Backup job settings.
You can find other recommended settings in this thread as well, for example, in this post.
asris wrote:So is it even worth enabling deduplication on the Windows volume? My only concern is its effect on Instant VM Recovery performance.
Indeed, having deduplication on the primary storage does not comply with our general recommendations.

Choodee
Influencer
Posts: 21
Liked: 3 times
Joined: Mar 03, 2012 4:30 pm
Full Name: Sandee Dela Cruz
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Choodee » Feb 27, 2016 5:47 pm

Great thread. What are the recommended settings for creating a new backup repository on a Windows Server 2012 dedup volume? Should the options below be checked?

Align backup file data blocks
Decompress backup data blocks before storing

Thanks

Gostev
SVP, Product Management
Posts: 25082
Liked: 3667 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Gostev » Feb 27, 2016 7:39 pm

No to the first, since Windows uses a variable block size for dedupe.
Yes to the second, otherwise you will get no dedupe at all.

Choodee
Influencer
Posts: 21
Liked: 3 times
Joined: Mar 03, 2012 4:30 pm
Full Name: Sandee Dela Cruz
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by Choodee » Feb 27, 2016 8:58 pm 3 people like this post

I just wanted to summarize what I found useful in this thread for others who are combing through. Thank you to everyone for their contributions.

I read through the whole thread and summarized the following important points.
veeam-backup-replication-f2/best-practi ... 2-165.html

Windows Server 2012 Dedup Best Practices:

Format the Windows Server Volume by using 'Large File Records'
https://www.veeam.com/kb2023

1. It is important to format the volume with a large NTFS file record segment (FRS) (4096 bytes instead of the default 1024), as you could otherwise face NTFS limitation errors in the future. You can verify the FRS size with the following command:
fsutil fsinfo ntfsinfo <volume pathname>

Command to reformat an NTFS volume with the larger FRS (/L):
format <volume pathname> /L

2. Use the maximum NTFS allocation unit size (cluster size) of 64 KB.
Combined with the option mentioned in pt. 1, the two commands unite into one:
format <volume pathname> /A:64K /L

3. Avoid letting backup files grow beyond 1 TB.
4. For backup jobs, the preferred method is forward incremental with active full backups enabled. (Forever forward incremental will also work; the oldest increment is automatically merged into the full once the retention limit is reached.)

Veeam Settings:

Backup Repository:
No - Align backup file data blocks
Yes - Decompress backup data blocks before storing

Backup/Backup Copy/Replication Jobs:
Enable inline data deduplication - Yes
Exclude swap file blocks from processing - Yes
Compression Level - None
Storage Optimization - LAN target

Veeam Notes:
Format the disk using the command line "/L" for "large size file records".
Also format using 64KB cluster size.
Use Windows 2012 R2 and apply all patches, as some update rollups contain dedup improvements.
Use Active full jobs with Incrementals or Forever Incrementals.
Turn Veeam's compression to "None" and use the "LAN target" block size. Veeam's inline deduplication can stay on.
Windows Server Dedup Notes:
Modify the garbage collection schedule to run daily rather than weekly.
Try to keep your VBK files below 1 TB in size; Microsoft doesn't officially support files bigger than this, and large files take a long time to dedup and must be fully reprocessed if the process is interrupted.
Use multiple volumes where possible. Windows dedup is single-threaded, but it can process multiple volumes at once. (Bigger volumes do mean better dedup ratios, though!)
Configure your dedup process to run once a day, and for as long as possible.
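The Windows-side notes above can be scripted with the built-in Deduplication cmdlets. A sketch only: the volume letter, schedule names, and time windows are placeholders, and it assumes Server 2012 R2 with the Data Deduplication feature installed (check the cmdlet parameters against the documentation for your build):

```powershell
# Enable dedup on the backup volume (D: is a placeholder)
Enable-DedupVolume -Volume "D:"

# Process new files immediately instead of waiting the default file age
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0

# Run garbage collection daily instead of the default weekly schedule
New-DedupSchedule -Name "DailyGC" -Type GarbageCollection `
    -Days Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday `
    -Start 06:00 -DurationHours 4

# Give optimization one long daily window, outside the backup window
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization `
    -Days Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday `
    -Start 20:00 -DurationHours 10
```

Keeping the optimization window clear of the backup window matters, since dedup jobs and merges competing for the same spindles is where the performance complaints in this thread come from.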

HussainMahfood
Enthusiast
Posts: 35
Liked: 7 times
Joined: Jun 24, 2013 9:43 am
Full Name: Hussain Mahfood
Contact:

[MERGED] : MS Windows 2012R2 deduplication compression setti

Post by HussainMahfood » May 03, 2016 12:57 pm

Hi,

I have the dedupe-friendly compression level set in a daily backup job writing to an MS Windows 2012 R2 deduplication volume, as per best practice.

My question: if I change this to High compression, would I still benefit from deduplication or not?

Note: I know that High compression incurs roughly 10x higher CPU usage, which I do not care about.

foggy
Veeam Software
Posts: 18439
Liked: 1588 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by foggy » May 03, 2016 1:06 pm

The higher the compression level, the lower the deduplication ratio you will get for the backup files.
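A small, contrived experiment showing the mechanism. It uses fixed 4 KiB blocks (Windows dedup actually uses variable-size chunking, so real numbers differ), with two synthetic "backup files" that share most of their data:

```python
import hashlib
import zlib

# Two backups that share most of their 4 KiB blocks.
common = (b"base-os-image " * 300)[:4096]
backup1 = common * 10 + b"unique to backup 1".ljust(4096, b".")
backup2 = common * 10 + b"unique to backup 2".ljust(4096, b".")

def dedup_ratio(*streams, size=4096):
    """Total blocks / unique blocks, as a fixed-block deduplicator sees it."""
    blocks = [s[i:i + size] for s in streams for i in range(0, len(s), size)]
    return len(blocks) / len({hashlib.sha256(b).digest() for b in blocks})

raw = dedup_ratio(backup1, backup2)
compressed = dedup_ratio(zlib.compress(backup1), zlib.compress(backup2))
print(round(raw, 1), round(compressed, 1))  # 7.3 1.0
```

Compression squeezes out exactly the redundancy the deduplicator would otherwise find, so the compressed streams share essentially no blocks even though the originals were nearly identical.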

veremin
Product Manager
Posts: 17047
Liked: 1470 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by veremin » May 03, 2016 1:08 pm

The deduplication will still be there, though its rates are likely to be affected negatively. So, I don't think it's worth it. Thanks.

lightsout
Expert
Posts: 218
Liked: 58 times
Joined: Apr 10, 2014 4:13 pm
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by lightsout » May 03, 2016 1:13 pm

Agreed. From my testing, Windows dedup does better without any compression.

HussainMahfood
Enthusiast
Posts: 35
Liked: 7 times
Joined: Jun 24, 2013 9:43 am
Full Name: Hussain Mahfood
Contact:

Re: Best Practice for MS Server 2012 DeDup Repo

Post by HussainMahfood » May 03, 2016 1:18 pm

I will need to offload these files to tape later.

If these files are stored uncompressed on the deduplicated repository, will they be offloaded uncompressed by the backup to tape job?
