- Posts: 14
- Liked: 2 times
- Joined: Feb 02, 2017 2:13 pm
- Full Name: JC
2. Inline dedupe
Yes, talking about inline dedupe means I'm putting my primary backups on an all-flash array. I'm not crazy; it worked out cheaper, trust me. My DR site will only have native compression (no inline dedupe).
Maximizing space savings is still a concern, though. While I'll be using forever incremental with synthetic fulls on ReFS, I'll also have easily deduplicated data: since GFS retention isn't supported in backup jobs, I have to use a backup copy job to get the GFS retention I want on primary storage. That means each server will have a duplicate "seed" with GFS retention sitting on my primary storage, which should be ripe for deduplication. I think I should treat the primary storage like I would a traditional dedupe appliance.
Here is my thinking. I'd configure the backup jobs and the primary repos (backup and GFS) as follows:
Uncheck “Enable inline data deduplication”
Change the Compression Level to (off?? or dedupe-friendly?)
Change Optimize for to “Local target (16TB+ backup files)”
Repo: Align backup file data blocks
Repo: Check decompress data blocks before storing
Backup copy jobs to enable GFS to my primary storage GFS repo:
Uncheck inline data deduplication
compression level (off??)(or dedup friendly?)
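To sanity-check the "Align backup file data blocks" setting above, here's a rough illustration of why alignment matters to a fixed-block dedupe engine. This is plain Python with a hypothetical 4 KB block size, not anything resembling Veeam's or the array's actual implementation: identical data that lands even one byte off produces entirely different block fingerprints, so the appliance sees nothing to deduplicate.

```python
import hashlib
import random

BLOCK = 4096  # hypothetical fixed block size used by the dedupe appliance

def fingerprints(data):
    # Hash each fixed-size block, the way a fixed-block dedupe engine would.
    return {hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)}

random.seed(42)
backup1 = bytes(random.getrandbits(8) for _ in range(64 * BLOCK))
backup2_aligned = backup1             # identical, block-aligned copy
backup2_shifted = b"\x00" + backup1   # same bytes, shifted by one

f1 = fingerprints(backup1)
print(len(f1 & fingerprints(backup2_aligned)))  # 64 -> every block dedupes
print(len(f1 & fingerprints(backup2_shifted)))  # 0  -> alignment lost, nothing dedupes
```

Same payload both times; only the alignment changed. That's the whole argument for checking the align option on a repo backed by a fixed-block dedupe array.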
For my secondary storage (inline compression, no inline dedupe; ReFS, one repo):
Backup copy settings:
Check enable inline data deduplication
Compression level (off??)
Repo: Do I check decompress data blocks or not worry about it?
Repo: Do not check Align backup file data blocks
Does anyone have suggestions for the above plan, especially the settings I've marked with question marks?
Will I be doing something dumb by turning off inline dedupe in the backup jobs? I'm thinking it won't matter since the primary array has inline dedupe, but since the backup copy job to DR lands on a non-dedupe array, should I just go ahead and let inline dedupe happen there?
CPU, backup windows, memory, and honestly LAN traffic from primary to DR don't really matter in my environment. I'm trying to maximize space savings.
- Veeam Software
- Posts: 1712
- Liked: 231 times
- Joined: Nov 17, 2015 2:38 am
- Full Name: Joe Marton
- Location: Chicago, IL
For the copy jobs I'm not as sure. Without inline dedupe I'd suggest setting the compression setting in the copy job to optimal. The default setting of auto will use the compression setting of the source backup files, which is no compression due to the "decompress data blocks" setting. However, your secondary repo has inline compression, and I just don't know enough to tell you which will give you better compression: us with optimal (any higher setting will require more CPU for minimal additional savings) or your hardware compression. I'll let others chime in on this one.
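The diminishing-returns point above can be illustrated generically with zlib's compression levels (purely a stand-in; these are not the codecs Veeam actually uses at each of its settings):

```python
import random
import zlib

random.seed(7)
# Word-salad sample data: compressible, but not trivially repetitive.
vocab = [b"server", b"backup", b"restore", b"block", b"veeam",
         b"refs", b"dedupe", b"gfs", b"copy", b"retention"]
data = b" ".join(random.choice(vocab) for _ in range(100000))

# Compare output sizes at a fast, a default, and a maximum effort level.
sizes = {level: len(zlib.compress(data, level)) for level in (1, 6, 9)}
for level, size in sorted(sizes.items()):
    print(level, size)
# Typically each step up in level buys a smaller saving than the last,
# while costing noticeably more CPU.
```

The pattern generalizes across compressors: the jump from "fast" to "balanced" buys most of the ratio, and everything past that is CPU for scraps.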