Comprehensive data protection for all workloads
scottmlew
Influencer
Posts: 22
Liked: 3 times
Joined: Jun 09, 2009 5:26 pm
Full Name: scottmlew
Contact:

surprising compression and dedup results

Post by scottmlew »

Hello,

I am evaluating the backup product to replace our current backup solution for our ESX servers. I am using network backup, and have seen some surprising results using "optimal" compression.

First, I tried backing up an Exchange server. It has an 8GB and a 30GB disk. There is 1.2GB of free space on the first disk, and 22.5GB free on the second disk. Yet, my backup size is 35GB -- why is this? I then did an incremental backup, less than 12 hours later, and the size of the incremental is 1.8GB (this is on a server doing email for only 4 people!) -- why is this?

In a separate trial, I backed up 2 Win2003 EE R2 servers as separate jobs. They are virtually identical, except one is configured for file sharing (with the file share volumes not selected for backup) and the other is configured as a certificate authority. The resulting backup sizes were 5.2GB each. I then did a job with both of these servers included, and the resulting size was 10.1GB -- why am I seeing so little dedup?

I know these are questions that probably require detailed analysis of my environment and machines to diagnose, but I was hoping for some general comments so that I can decide if we should pursue our evaluation of this backup product.

Thank you in advance!
Gostev
Chief Product Officer
Posts: 31709
Liked: 7215 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: surprising compression and dedup results

Post by Gostev »

Hello Scott,

Re: Exchange
Optimal compression can give you good compression ratios only for a newly created VM, or if your VM disks have been optimized with a defrag & wipe procedure. From the results you are getting, it is clear that your disks do not have any white space (zeroed blocks). All disk sectors contained data at some point, and as you know, deleting a file does not actually remove its data from disk (this is why you can always UNdelete a file). So, all 35GB of your disks still contain some data. You should either use Best compression for this VM, or optimize it with the defrag & wipe procedure (which is a good thing to do anyway).
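
To illustrate the point above: an all-zero block compresses to almost nothing, while a block still holding stale "deleted" data compresses hardly at all. This is a toy sketch (not the product's actual engine; the 1 MiB block size is made up for illustration) using Python's standard zlib:

```python
import os
import zlib

BLOCK = 1024 * 1024  # hypothetical 1 MiB illustration block

# Wiped free space: all zero bytes -- compresses to a tiny fraction
zeroed = bytes(BLOCK)

# "Deleted" files leave stale data behind; random bytes stand in for it here,
# and incompressible data stays roughly full size after compression
stale = os.urandom(BLOCK)

print(len(zlib.compress(zeroed)))  # tiny, on the order of a kilobyte
print(len(zlib.compress(stale)))   # roughly the full 1 MiB
```

This is why zeroing the free space (the "wipe" half of defrag & wipe) makes such a large difference to the full backup size.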

Re: Deduplication
From your results it looks like your VMs were either not made from the same template, or one of them had its disk contents heavily modified after creation. Otherwise, you would see much better deduplication ratios.

Hope this helps.
scottmlew
Influencer
Posts: 22
Liked: 3 times
Joined: Jun 09, 2009 5:26 pm
Full Name: scottmlew
Contact:

Re: surprising compression and dedup results

Post by scottmlew »

Thanks for the speedy reply.

re: Exchange -- I was wondering about the free space, and whether the issue was that there was data in those blocks at one point (which there most definitely was)... I wasn't sure if the product did something really slick, like looking at the filesystem to determine used vs. unused blocks. I am going to experiment with the defrag and wipe and see how that affects my results. I'm still a bit puzzled by the large incremental backup, though.

re: dedup on the other 2 machines -- they were definitely made from the same template, but it's entirely possible that significant modifications since then have caused the block contents to diverge a lot. That said, since they're both running fully patched versions of Win2003, I'd expect there to be more common blocks... if they have the same block size, the blocks holding any given file have to be identical, don't they? (ignoring blocks allocated to small files, which also contain NTFS metadata, IIRC).
Gostev
Chief Product Officer
Posts: 31709
Liked: 7215 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: surprising compression and dedup results

Post by Gostev »

Incremental backup sizes will be much smaller after defragmentation as well. Right now, any change to the disk gets scattered across multiple blocks due to fragmentation, and our engine has to pick up all of those changed blocks during the incremental pass. After defragmentation, disk changes should be much more physically "consolidated", and will touch far fewer data blocks. Just make sure to create a new job for testing after the defrag & wipe procedure instead of continuing the old one (otherwise it will pick up all the changes made by defrag & wipe).

As for dedupe, you should just try this again with VMs freshly created from the same template. I did a similar test recently (created 2 VMs from the same template and installed a few different apps on each), and my results were great (I can look up the exact numbers tomorrow).