rntguy
Enthusiast
Posts: 82
Liked: 1 time
Joined: Jan 29, 2016 8:31 pm
Full Name: Rnt Guy

Recommended settings for least used space and bandwidth

Post by rntguy »

We have a Data Domain (DD) that is offsite (managed Cloud Connect). I was reading up on the recommended settings for copy jobs going to a DD, and one of them is NOT checking 'enable inline data deduplication', presumably because the DD does better with its own deduplication metrics.

However, the DD repository has the 'decompress data upon arrival' box checked, which as I understand it is supposed to put the blocks back in full raw format before the DD does anything with them, including its own dedupe and compression. Is this accurate?

It would seem that if the bandwidth link is the limiting factor, then we'd want to enable that inline dedupe box so that it doesn't send as much data over the wire as it would unchecked, and then Veeam would decompress it back to raw format before the DD did its own work.
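
In other words, the pipeline I'm picturing is something like this toy Python round trip (zlib standing in for Veeam's actual codec; the block size and contents are made up): compression only changes what crosses the wire, and decompress-on-arrival hands the DD back the exact raw bytes.

```python
# Toy model of "compress for the wire, decompress on arrival":
# only the wire sees compressed bytes; the DD ingests the raw block.
import zlib

raw_block = b"customer record\x00" * 262144   # one hypothetical 4 MiB block
wire_block = zlib.compress(raw_block, 9)      # what the copy job would send
stored_block = zlib.decompress(wire_block)    # what the repository hands the DD

assert stored_block == raw_block              # raw format fully restored
print(f"raw: {len(raw_block) / 2**20:.1f} MiB, "
      f"on the wire: {len(wire_block) / 2**10:.1f} KiB")
```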

I'm on 9.5 U2 in case that matters; Cloud Connect is at the latest version. Posting here because there's more traction on this side. The same question could exist if the DD were on a limited intranet link, I suppose.

We want whatever uses the least space on the DD ultimately, and whatever takes the least bandwidth. The CPU hit is not an issue, even at the extreme compression setting for the local backup job. We'd like to keep the copy and backup jobs at the same compression rate so the data doesn't have to be re-packaged before being sent out.

Our backup job is set to extreme compression due to limited space on the repository, plenty of CPU resources, and being OK with longer restore times. It's also set to WAN target, since it goes offsite to a DD via Cloud Connect, instead of Local (16TB+).
Andreas Neufert
VP, Product Management
Posts: 7081
Liked: 1511 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany

Re: Recommended settings for least used space and bandwidth

Post by Andreas Neufert »

Disabling inline dedup enables optimizations at restore time for metadata handling (caching instead of random reads); Veeam's dedup would not affect the overall data reduction of the whole process anyway.
Compression should be disabled on the target to allow Data Domain dedup to work.

So at the job level, use Local target (16TB+) as the storage optimization size, and use all best-practice settings for your primary backup job.

On the BCJ, the target is the Data Domain (DD Boost) repository.

Repository (Data Domain):
- Enable "Decompress backup data blocks before storing"

On the BCJ:
- Enable compression (Auto); the repository will decompress it upon arrival
- Disable inline dedup

Use WAN accelerators if you would like to reduce the data sent over the link.
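
To see why that decompression matters, here is a toy sketch (fixed-size 4 KB chunks and zlib as stand-ins; Data Domain actually uses variable-length segmentation and its own codec, so only the direction of the numbers is meaningful): two backup generations that differ by a few bytes dedupe almost perfectly as raw data, but hardly at all once each generation has been compressed, because the compressed streams diverge after the first changed byte.

```python
# Toy dedup comparison, NOT the actual Data Domain algorithms.
import hashlib
import zlib

CHUNK = 4096

def chunk_hashes(data):
    """Hash each fixed-size chunk so duplicates can be counted."""
    return [hashlib.sha256(data[i:i + CHUNK]).digest()
            for i in range(0, len(data), CHUNK)]

gen1 = b"".join(b"log entry %08d: status=OK\n" % i for i in range(40_000))
gen2 = bytearray(gen1)
gen2[1000:1010] = b"status=XX!"          # a single small change
gen2 = bytes(gen2)

for label, a, b in (("raw", gen1, gen2),
                    ("pre-compressed", zlib.compress(gen1), zlib.compress(gen2))):
    ha, hb = chunk_hashes(a), chunk_hashes(b)
    stored, total = len(set(ha) | set(hb)), len(ha) + len(hb)
    print(f"{label:>14}: store {stored} of {total} chunks "
          f"({100 * (1 - stored / total):.0f}% deduped)")
```

That is why the repository decompresses before the DD ingests anything: its dedup and local compression then operate on stable raw blocks.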
rntguy
Enthusiast
Posts: 82
Liked: 1 time
Joined: Jan 29, 2016 8:31 pm
Full Name: Rnt Guy

Re: Recommended settings for least used space and bandwidth

Post by rntguy »

I'm confused. Why not use extreme compression if the BCJ target will uncompress upon arrival and then apply its own compression and dedupe? The goal is to send as little data over the wire as possible so it goes faster, and then let the DD compress it after Veeam uncompresses it.

Also, why use Local 16TB if it's going over the WAN? WAN target is what's recommended when sending offsite.

The goal isn't the fastest restore in this case, but the fastest copy offsite and the greatest dedupe and compression by the DD.
Andreas Neufert
VP, Product Management
Posts: 7081
Liked: 1511 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany

Re: Recommended settings for least used space and bandwidth

Post by Andreas Neufert »

You can use extreme compression for the WAN link if you like.
16TB is the recommended file size setting to work with the Data Domain, but of course you can change this to WAN or whatever. Just keep an eye on the proxy and repository servers, as they may need a bit more RAM then.
Best dedup on the DD is with the Local target (16TB+) setting.
rntguy
Enthusiast
Posts: 82
Liked: 1 time
Joined: Jan 29, 2016 8:31 pm
Full Name: Rnt Guy

Re: Recommended settings for least used space and bandwidth

Post by rntguy »

Andreas Neufert wrote: Best dedup on the DD is with Local 16TB setting.
Why is Local 16TB "better" than WAN or any other choice if the cloud DD is configured in Veeam to uncompress upon arrival? Doesn't that put the data in raw format, allowing the DD to compress and dedupe with its own metrics? Doesn't that setting make all of them perform the same?

Or is Local 16TB best only when backing up to a local DD, where this feature isn't available?
foggy, do you have any insight?
Andreas Neufert
VP, Product Management
Posts: 7081
Liked: 1511 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany

Re: Recommended settings for least used space and bandwidth

Post by Andreas Neufert »

The only thing this setting affects is the block size that Veeam uses to write:

Local target (16TB+) => 4 MB
Local target => 1 MB
LAN target => 512 KB
WAN target => 256 KB

If you do not use Veeam WAN accelerators (which use maximum compression and variable-block-length deduplication for transport regardless of this setting), you can achieve faster transport with smaller block sizes over high-latency links.
But if you compare the 4 MB to the 256 KB blocks, you have to handle 16x more metadata with the smaller blocks, and that is something that can reduce write speed on the Data Domain.
So in tests with the Data Domain, the best performance and data reduction were achieved with the 4 MB block size.

Storage optimization (job option):
Setting the storage optimization to Local 16TB+ has been shown to improve the effectiveness of Data Domain’s deduplication. The larger this value is, the smaller the preparation phase will be for a backup task and less memory will be used to keep storage metadata in memory.
https://www.veeam.com/kb1956
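
As a rough illustration of that metadata difference (illustrative arithmetic only; the actual per-block metadata format is Veeam-internal):

```python
# Block counts per full backup of a hypothetical 10 TiB source at each
# storage-optimization setting; each block carries its own metadata entry.
SOURCE_BYTES = 10 * 2**40  # assumed 10 TiB of protected data

options = {
    "Local target (16TB+)": 4 * 2**20,    # 4 MB blocks
    "Local target":         1 * 2**20,    # 1 MB blocks
    "LAN target":           512 * 2**10,  # 512 KB blocks
    "WAN target":           256 * 2**10,  # 256 KB blocks
}

for name, block in options.items():
    print(f"{name:<22} -> {SOURCE_BYTES // block:>12,} blocks per full")
```

The 256 KB WAN setting tracks 16x the blocks of the 4 MB setting, which is the extra metadata load mentioned above.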

Hint: the block size will be the same within the whole chain. If you change the block size, you need to run an active full for the change to take effect on the backup chain.
rntguy
Enthusiast
Posts: 82
Liked: 1 time
Joined: Jan 29, 2016 8:31 pm
Full Name: Rnt Guy

Re: Recommended settings for least used space and bandwidth

Post by rntguy »

Thank you. So let me see if I get this:

We do NOT use WAN accelerators and we DO have high-latency links. For the fastest transport of data, use WAN target then, right? That will make the DD writes slower, but if those are performing well, this is OK.

You said 4 MB (Local 16TB) will also improve 'data reduction'. Do you mean the amount of data sent over the WAN link or the amount of data ultimately stored on the DD?

I'm ultimately not concerned with the length of the 'preparation phase' or how much memory it uses in our situation. I'm most concerned with how well the DD can dedupe and compress the data, and which setting sends the least amount of data over the internet link on the copy job.

Since the BCJ DD is set to uncompress upon arrival, won't all Veeam compression settings end up decompressed, resulting in the same DD dedupe rates? Some may take longer to dedupe, but the stored size should ultimately be the same, no?
foggy
Veeam Software
Posts: 21139
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Recommended settings for least used space and bandwidth

Post by foggy »

rntguy wrote: We do NOT use WAN accelerators and we DO have high-latency links. For the fastest transport of data, use WAN target then, right?
Correct, and with the smaller block size you will transfer less data, which is your major concern.
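
For intuition, a toy model of that effect (assumed random write pattern, not real changed-block tracking data): every block touched by a change is re-sent whole in the next incremental, so coarse blocks amplify scattered small writes.

```python
# Toy incremental-size comparison for two block granularities.
import random

DISK_BYTES = 2**40              # hypothetical 1 TiB volume
random.seed(1)
offsets = [random.randrange(DISK_BYTES) for _ in range(20_000)]  # small writes

for label, block in (("Local 16TB+ (4 MB)", 4 * 2**20),
                     ("WAN (256 KB)", 256 * 2**10)):
    touched = {off // block for off in offsets}
    print(f"{label:<20}: ~{len(touched) * block / 2**30:.1f} GiB incremental")
```

The same 20,000 scattered writes produce a far smaller incremental at 256 KB granularity, at the cost of the metadata overhead discussed earlier.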