
Recommended settings for ReFS and Nimble

Post by jimmycartrette »

I know there is existing guidance on using Nimble storage as a Veeam repository, but it doesn't seem to address two particular factors:
1. ReFS
2. Inline dedupe

Yes, talking about inline dedupe means I am putting my primary backups on an all-flash array. I'm not crazy; it worked out cheaper, trust me. My DR site array will only have native compression (no inline dedupe).

Maximizing space savings is a concern, however. While I'll be using forever incremental with synthetic fulls on ReFS, I will also have easily deduplicated data: I have to use a backup copy job to get the GFS retention I want on primary storage, since GFS restore points aren't supported in regular backup jobs. So I'll have a duplicate "seed" of each server carrying GFS retention on my primary storage, which should be ripe for deduplication. I think I should treat the primary storage like I would a traditional dedupe appliance.
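(Side note: I'm planning to format the ReFS repo volumes with a 64 KB cluster size, since that's what's generally recommended for Veeam block cloning / fast synthetic fulls. A minimal sketch of that step; the drive letter and label are placeholders:)

Code: Select all

# Format the repository volume as ReFS with a 64 KB allocation unit size,
# the cluster size generally recommended for Veeam block cloning (fast synthetic fulls).
# Drive letter and volume label are placeholders.
Format-Volume -DriveLetter R -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "VeeamRepo"

# Confirm the cluster size afterwards ("Bytes Per Cluster" should read 65536).
fsutil fsinfo refsinfo R: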

Here is my thinking: configure the backup jobs and primary repos (backup and GFS) as follows (a rough PowerShell sketch of the job-level settings follows this list):
Uncheck "Enable inline data deduplication"
Change the compression level to off?? (or dedupe-friendly?)
Change "Optimize for" to "Local target (16TB+ backup files)"
Repo: check "Align backup file data blocks"
Repo: check "Decompress backup data blocks before storing"
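To make that concrete, here's a rough PowerShell sketch of the job-level part via the Veeam snap-in. The job name is a placeholder, and I'm writing the parameter and enum names (EnableDeduplication, CompressionLevel, StorageBlockSize) from memory, so check them against the PowerShell reference for your B&R version:

Code: Select all

Add-PSSnapin VeeamPSSnapin                  # Veeam B&R PowerShell snap-in
$job = Get-VBRJob -Name "Primary Backup"    # placeholder job name

# Inline dedupe off, dedupe-friendly compression (0 = none, 4 = dedupe-friendly,
# 5 = optimal), storage optimization "Local target (16TB+ backup files)".
Set-VBRJobAdvancedStorageOptions -Job $job -EnableDeduplication $false -CompressionLevel 4 -StorageBlockSize KbBlockSize4096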

Backup copy jobs (to enable GFS) targeting my primary storage GFS repo:
Uncheck "Enable inline data deduplication"
Compression level: off?? (or dedupe-friendly?)

For my secondary storage (inline compression, no inline dedupe), ReFS, one repo:
Backup copy job settings:
Check "Enable inline data deduplication"
Compression level: off??
Repo: Do I check "Decompress backup data blocks before storing", or not worry about it?
Repo: Do not check "Align backup file data blocks"


Does anyone have suggestions for the above plan, especially the items marked with question marks that I'm not sure about?
Will I be doing something dumb by turning off inline dedupe on the backup jobs? I'm thinking it won't matter since the primary array has inline dedupe, but since the backup copy job will land on a non-dedupe array, should I just go ahead and let Veeam's inline dedupe happen?

CPU, backup windows, memory, and honestly LAN traffic from primary to DR don't really matter in my environment. I'm trying to maximize space savings.
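One related thing I plan to keep an eye on: with block cloning on ReFS (and dedupe on the array), the apparent size of the backup files overstates what is actually consumed, so I'll compare file totals against real volume usage rather than trusting file sizes. A quick sketch, with placeholder drive letter and path:

Code: Select all

# Sum of backup file sizes vs. actual space used on the ReFS repo volume.
# With block cloning, file sizes can add up to far more than the space consumed.
# "R" and the path are placeholders.
$files = Get-ChildItem -Path 'R:\Backups' -Recurse -File
'{0:N1} GB apparent (sum of file sizes)' -f (($files | Measure-Object -Property Length -Sum).Sum / 1GB)

$vol = Get-Volume -DriveLetter R
'{0:N1} GB actually used on the volume' -f (($vol.Size - $vol.SizeRemaining) / 1GB)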

Re: Recommended settings for ReFS and Nimble

Post by jmmarton (Veeam Software) » 1 person likes this post

For the initial backup job, I'd leave inline dedupe on. This will reduce the amount of data sent to the repository, and then the array's inline dedupe will reduce it further using global dedupe. This config may skew the stats on the hardware to show a smaller dedupe ratio, but you get the benefit of sending less data to it. As to compression, I'd also leave that at the default, which is optimal compression. The repository setting "decompress data blocks before storing" is all you need. Again, the defaults do a good job of limiting data across the wire, and the repository settings handle the rest.

For the copy jobs I'm not as sure. Without inline dedupe on the target array, I'd suggest setting the compression level in the copy job to optimal. The default setting of auto will use the compression level of the source backup files, which is no compression due to the "decompress data blocks" setting on the source repository. However, your secondary array has inline compression, and I just don't know enough to tell you which will give you better results: us with optimal (any higher setting requires more CPU for minimal additional savings) or your hardware compression. I'll let others chime in on this one.
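If you'd rather flip that from PowerShell than the UI, something along these lines should do it. The copy job name is a placeholder, and copy jobs may be handled through different cmdlets in newer releases, so verify against your version:

Code: Select all

# "Copy to DR" is a placeholder name for the backup copy job.
$copyJob = Get-VBRJob -Name "Copy to DR"

# CompressionLevel 5 corresponds to "Optimal" in the job settings UI.
Set-VBRJobAdvancedStorageOptions -Job $copyJob -CompressionLevel 5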

Joe