AcrisureRW
Novice
Posts: 8
Liked: 2 times
Joined: Mar 20, 2018 7:51 pm
Full Name: Ryan Walker

Many to One Copy Job Repository

Post by AcrisureRW »

We have many (20+) off-premise systems of 1-10 VMs each, licensed per-VM (and some per-socket), each backing up to a local JBOD/NFS NAS repository on Windows Server 2016 with ReFS.

Our goal is to have ~30 days onsite, ~60-90 days off-site in our private cloud locations, and 60-90+ days in a public cloud archive tier (e.g. S3/Glacier).
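For rough sizing of those tiers, here's the back-of-the-envelope sketch I'm working from (Python, purely illustrative; the VM count, full size, change rate and reduction ratio below are placeholder assumptions, not measured numbers):

Code:
# Rough per-tier capacity estimate. Every input is a placeholder assumption.
vm_count = 100          # e.g. ~20 sites averaging 5 VMs each (assumed)
full_size_gb = 100      # assumed average full backup size per VM
daily_change = 0.05     # assumed 5% daily change rate
reduction = 2.0         # assumed 2x dedupe/compression on backup files

def tier_size_tib(days, fulls=1):
    # one (or more) fulls plus daily incrementals per VM, after reduction
    per_vm_gb = (fulls * full_size_gb + days * full_size_gb * daily_change) / reduction
    return per_vm_gb * vm_count / 1024

print(f"Onsite  (~30 days):    {tier_size_tib(30):.1f} TiB")
print(f"Offsite (~90 days):    {tier_size_tib(90):.1f} TiB")
print(f"Archive (3 GFS fulls): {tier_size_tib(0, fulls=3):.1f} TiB")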

These systems vary in OS and application, so there are a lot of unique elements. As such, I'm trying to determine the best option for a repository in one of our data centers that gives us the best cost-to-storage value.

Restore rate isn't as important since these are DR systems (our data center systems back up to Cisco S3260 units to get better restore I/O), and I expect throughput/ingest will be limited more by our bandwidth than by the system.

We're not using WAN acceleration; instead we've been using a Data Domain 2200 with DD Boost, and it seems to do pretty well. We're running out of space, but it gives nice dedupe/compression for these jobs:

Used: 14.6 TiB
Pre-Compression: 87.4 TiB
Total Compression Factor (Reduction %): 6.0x (83.3%)
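For reference, the compression factor and the reduction percentage are just two views of the same two numbers; a quick check in Python using the figures above:

Code:
used_tib = 14.6
pre_comp_tib = 87.4

factor = pre_comp_tib / used_tib                        # ~6.0x
reduction_pct = (1 - used_tib / pre_comp_tib) * 100     # ~83.3%

print(f"Factor: {factor:.1f}x, Reduction: {reduction_pct:.1f}%")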

However, this is all post-process, which makes cleaning a nightmare. We run Pure Storage arrays for our production environment and Nimble for tier 2. Both have very good inline dedupe, but with no direct API integration with Veeam we'd still be sending the full incremental through the pipe. However, since we could put a Windows Server 2019 repository with ReFS and Data Deduplication in front of those SANs, on top of their own very strong inline dedupe, I feel like that might be well worth it. Synthetic fulls would be much faster, and, while less important, restores would be as well.

Most of our ROBO locations have between 10-20 Mbps upload, so this could be the deciding factor in going with a solution that offers some pre-transmit dedupe such as DD Boost. From what I've read, WAN acceleration really doesn't help all that much for anything above 10 Mbps upload.
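To put those uplinks in perspective, here's a quick copy-time sketch (Python; the 20 GB incremental size and the 80% link-efficiency factor are assumptions for illustration):

Code:
def hours_to_copy(size_gb, uplink_mbps, efficiency=0.8):
    # transfer time in hours, assuming ~80% of nominal link speed is usable
    size_bits = size_gb * 8 * 1024**3
    return size_bits / (uplink_mbps * 1_000_000 * efficiency) / 3600

for mbps in (10, 20):
    print(f"20 GB incremental at {mbps} Mbps: {hours_to_copy(20, mbps):.1f} h")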

Does anyone have insight into these options? For me, global dedupe is paramount, given such dissimilar jobs and locations.
HannesK
Product Manager
Posts: 14848
Liked: 3088 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Many to One Copy Job Repository

Post by HannesK »

Hi Ryan,
there is no problem pointing 20 (or however many) backup copy jobs to one repository. The only limits are hardware and bandwidth; there is no limit in the software.

Could you explain the DD Boost part? I understood that you have a Cisco S3260 as the backup copy job destination, so there is no DD Boost involved there.

If you want to save bandwidth, the Veeam WAN Accelerator will help you. You can find some best practices here. The 10:1 ratio is not a hard limit; with different time zones (fewer overlapping jobs) I have seen much higher values at customers.

I cannot really follow your ReFS & deduplication ideas. In general, Veeam is built for restore speed; that's one of the reasons why we don't do global dedupe. With or without synthetic fulls you will not see any significant speed difference. I would also question whether synthetic fulls on ReFS make sense, since they point to the same blocks anyway. If you want to save space, you could choose a higher compression level, but that costs a lot of CPU power!
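To illustrate the "same block" point, here is a toy model (Python; the block counts are arbitrary example values). A block-cloned synthetic full on ReFS only writes metadata pointers to existing blocks, so it changes neither the amount of unique data on disk nor, significantly, the backup window:

Code:
full_blocks = 1000        # blocks in the previous full (example value)
changed_blocks = 50       # blocks changed since, held in incrementals (example value)

copied_full_writes = full_blocks + changed_blocks   # classic synthetic full rewrites everything
cloned_full_writes = 0                              # ReFS fast clone only writes metadata pointers
unique_blocks_on_disk = full_blocks + changed_blocks

print(f"Copied synthetic full writes:       {copied_full_writes} blocks")
print(f"Block-cloned synthetic full writes: {cloned_full_writes} blocks (plus metadata)")
print(f"Unique blocks stored either way:    {unique_blocks_on_disk}")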
AcrisureRW wrote:
WAN acceleration really doesn't help all that much for anything > 10Mbps upload.
My customers have told me about values between 50-100 Mbit/s where it stopped helping, so at 10-20 Mbit/s it should be good. I recommend testing.

If I understood your environment correctly, the recommended design would be:
- primary backup jobs in the branch offices (30 days retention)
- backup copy jobs with WAN acceleration to your central S3260 (choose GFS retention to leverage S3)
- the S3260 must be part of a scale-out backup repository to be able to offload long-term retention to S3 (Capacity Tier, available with Update 4; see the sketch below)
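As a rough illustration of the Capacity Tier move policy in Update 4 (restore points in sealed backup chains that are older than the operational restore window are moved to object storage), here is a small sketch; the window length and the restore point ages are example values:

Code:
operational_window_days = 60                               # example operational restore window
sealed_point_ages = [1, 7, 14, 30, 45, 60, 75, 90, 120]    # days old (example values)

stay_local = [a for a in sealed_point_ages if a <= operational_window_days]
offloaded = [a for a in sealed_point_ages if a > operational_window_days]

print(f"Stay on the S3260 (performance tier): {stay_local}")
print(f"Moved to S3 (capacity tier):          {offloaded}")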

Best regards,
Hannes