Felix
Enthusiast

Veeam Backup on ZFS (NexentaStor)

Post by Felix »

Hey,

I'm currently working on a project that will use NexentaStor / ZFS for primary VM and backup storage.

The backup storage will consist of 12x 2TB 7.2k SATA III drives in RAID10 (or ZFS load-shared mirrors, to be exact) and will be attached to the ESXi 4.1 servers via 10GbE links. The appliance itself will be powered by 8x 2.4 GHz cores (+HT) and 54 GB RAM. The ESXi servers will each have 12x 2.66 GHz cores (+HT) and 96 GB RAM.

We have already ordered Veeam Enterprise licenses, so we can leverage Veeam Backup's full potential.

Now for the interesting part:
ZFS allows for dedup and compression inside the filesystem, for both block devices (called ZVOLs, used for iSCSI) and regular file shares (NFS, SMB/CIFS).
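
For reference, both are per-dataset properties; a minimal sketch, assuming a hypothetical pool named tank (standard ZFS syntax, which NexentaStor exposes in its shell):

  # Hypothetical pool/dataset names; create a backup dataset and enable both features
  zfs create tank/veeam-backups
  zfs set compression=lzjb tank/veeam-backups   # gzip-1 through gzip-9 also available
  zfs set dedup=on tank/veeam-backups
  zfs get compression,dedup tank/veeam-backups  # verify the settings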

Given that similar technologies are implemented in Veeam Backup itself, has anyone compared performance when using either of these features in Veeam vs. on the storage appliance?

Best Regards,
Felix Buenemann
Gostev
Chief Product Officer

Re: Veeam Backup on ZFS (NexentaStor)

Post by Gostev »

Hi Felix, I've seen a tweet about this recently from another user. He found that ZFS cannot really dedupe/compress beyond what we already do. However, theoretically ZFS should be able to dedupe between different backup files (if you are using multiple jobs). In any case, I would not recommend disabling Veeam dedupe and compression, because they are done on the source side and thus positively affect processing performance and the backup window. Thanks!
Felix
Enthusiast

Re: Veeam Backup on ZFS (NexentaStor)

Post by Felix »

Thanks for the tips.

We'll also be using ZFS replication to do block-level replication of the primary backup storage to a remote site.

In this scenario, would it be helpful to use true incrementals instead of Veeam Backup's default reversed incrementals? Which mode would cause fewer data changes on disk, and therefore less replication traffic?
Gostev
Chief Product Officer

Re: Veeam Backup on ZFS (NexentaStor)

Post by Gostev »

The default is regular (true) incremental mode. For block-level replication, the least traffic can be achieved by using the reversed incremental mode together with 5.0.1 HF1 and a special registry key that prevents renaming the VBK file after each incremental pass. All of this can be obtained through support.

Regular incremental backup is the second-best option: it will produce less traffic on business days than the option above (since only the incremental file has to be shipped, versus a reversed incremental file plus updates to the VBK), but on the other hand it will produce more traffic during the weekend, because you will need to transfer the new full backup file in its entirety. For some customers this might be the better option: less traffic on business days, when it really matters, at the price of more traffic on the weekend, when it does not.
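
To make the tradeoff concrete, a rough back-of-the-envelope sketch (all numbers are hypothetical; a 500 GB full with 10 GB of changed blocks per day just illustrates the shape of the math):

  # Weekly replicated changes, 5 business days plus a weekend full
  full_gb=500; inc_gb=10
  # Forward incremental: one incremental per day, plus a new full on the weekend
  echo "forward:  $((5 * inc_gb + full_gb)) GB/week"   # 550 GB, but only 10 GB/day midweek
  # Reversed incremental (VBK rename disabled): each day writes the rollback
  # file plus roughly the same amount of changed blocks into the VBK
  echo "reversed: $((5 * 2 * inc_gb)) GB/week"         # ~100 GB, but ~20 GB/day midweek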

Hope this helps.
SteveNeruda
Novice

Re: Veeam Backup on ZFS (NexentaStor)

Post by SteveNeruda »

Make sure you have a lot of memory on the ZFS server if you attempt deduplication. If the dedup table doesn't stay in memory, you will see severe performance degradation.
Gostev
Chief Product Officer

Re: Veeam Backup on ZFS (NexentaStor)

Post by Gostev »

Hi Steve, how much is "a lot"? What would you recommend? Thanks.
ccrichard
Enthusiast

Re: Veeam Backup on ZFS (NexentaStor)

Post by ccrichard »

Taken from this link:
http://hub.opensolaris.org/bin/view/Com ... +zfs/dedup

3. Make sure your system has enough memory to support dedup. Determine the memory requirements for deduplicating your data as follows:

A. Use the zdb -S output to determine the in-core dedup table requirements:

   - Each in-core dedup table entry is approximately 250 bytes.
   - Multiply the number of allocated blocks by 250. For example:
     in-core DDT size = 3.75M x 250 = 937.50M

B. Additional memory considerations from Roch's excellent blog:

20 TB of unique data stored in 128K records, or more than 1 TB of unique data in 8K records, would require about 32 GB of physical memory. If you need to store more unique data than these ratios provide, strongly consider allocating a large read-optimized SSD to hold the deduplication table (DDT). DDT lookups are small random I/Os that are well handled by current-generation SSDs.
/////////////////////////////////////////////////
Personally, I've done a little bit of testing with the Community edition of Nexenta. I have definitely not been running it in an optimized environment: it runs fully virtualized on Xen, on a desktop with only 4 GB of RAM. I'm using a Quantum DXi4510 to back up to, but I'm trying to work the Nexenta in as a secondary and archive system.
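
For anyone sizing memory, the rule of thumb quoted above reduces to simple arithmetic; a quick shell sketch (the block count is hypothetical and would come from the zdb -S output):

  # Rule of thumb from the quote: in-core DDT size = allocated blocks x ~250 bytes
  blocks=3750000                      # hypothetical; read this from 'zdb -S <pool>'
  bytes=$((blocks * 250))
  echo "DDT: $((bytes / 1000000)) MB" # 937 MB, matching the 937.50M example above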
digitlman
Enthusiast

Re: Veeam Backup on ZFS (NexentaStor)

Post by digitlman »

Funny, I have been experimenting with an OpenIndiana box (an OpenSolaris derivative), which supports ZFS dedup.

Transferring the same full VM backup five times in a row, where the data should be roughly the same, I'm seeing a 1.01 dedup ratio.

It looks like trying to dedupe a Veeam backup is a non-starter. Which is OK, since the file is already deduped.
ccrichard
Enthusiast

Re: Veeam Backup on ZFS (NexentaStor)

Post by ccrichard »

digitlman wrote: Transferring the same full VM backup five times in a row ... I'm seeing a 1.01 dedup ratio. It looks like trying to dedupe a Veeam backup is a non-starter.
What you can do is experiment by turning off dedupe in Veeam and then manually calculating the difference in size that you save; the ZFS statistics are not always up to date. Turn off all compression in Veeam as well, because compression will almost always prevent ZFS from being able to deduplicate the data.
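
A minimal sketch of how to check what ZFS is actually achieving (pool and dataset names are hypothetical):

  zpool get dedupratio tank                      # pool-wide dedup ratio
  zfs get compressratio,used tank/veeam-backups  # compression achieved on the dataset
  zdb -S tank                                    # simulate dedup on existing data (read-only)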

http://constantin.glez.de/blog/2010/03/ ... -need-know
digitlman
Enthusiast

Re: Veeam Backup on ZFS (NexentaStor)

Post by digitlman »

Should the deduplication ratio be around the same, whether the dedupe is being done through Veeam or ZFS?
Gostev
Chief Product Officer

Re: Veeam Backup on ZFS (NexentaStor)

Post by Gostev »

No, not really.
1. Different block sizes (a smaller block size means better deduplication).
2. Veeam also applies good compression to the deduped content (ZFS does not, or only very basic compression; I'm not sure).

We did a lot of testing on this matter and found that dedupe with a (relatively) large block size plus compression is the winning combination for an image-level backup application, compared to compression only, or dedupe only (with a small block size). The results are vastly superior in terms of the performance-to-compression-ratio tradeoff.
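
Point 1 cuts both ways: smaller blocks dedupe better, but every block needs a dedup-table entry. Using the ~250 bytes per DDT entry rule of thumb quoted earlier in this thread, a sketch for a hypothetical 1 TB of unique data:

  # In-core DDT footprint for 1 TB of unique data at two block sizes
  tb=$((1024 * 1024 * 1024 * 1024))
  echo "128K blocks: $((tb / (128 * 1024) * 250 / 1024 / 1024)) MB"  # ~2 GB
  echo "  8K blocks: $((tb / (8 * 1024) * 250 / 1024 / 1024)) MB"    # ~32 GB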