Comprehensive data protection for all workloads

Request feedback w/changes, looking to re-architect backup with VxRail, Data Domain Appliance and Cloud Provider

Post by toddr279 »

I'm looking to re-architect our backup solution. I was hoping someone might have time to look over what I'm attempting to do and see if there are any drawbacks or items I should consider. I do have a few questions. I appreciate any feedback.

Past
--------------------
When we first implemented Veeam, all our backups were stored remotely. They were initially backed up to a private cloud Azure Windows Server with 10 TB of storage, and a copy of each backup was then sent to another of the provider's facilities at a second remote location via Cloud Connect. For the most part, it has worked well for the past few years. The main issue has been maxing out our MPLS connection to our private cloud provider, which increases backup times. Our other issue is the growth in storage needs, both locally and for backups (primary and copy locations).

Changes
--------------------
Since then, we have upgraded our systems to a VxRail platform with more storage. We also added a local Data Domain appliance, and we are looking to bring our backups to our local data center while still sending a copy to our Cloud Connect partner.

New Direction
--------------------
I have built a local Windows Server to host Veeam Backup & Replication 11, and will attach the Data Domain appliance as a backup repository, along with our Cloud Connect provider. The main backup would be housed on our Data Domain appliance (not the local server), and a backup copy job would then send it to our Cloud Connect provider.

In the past, we kept our "hot" backups on the server, while the "cold" backups were sent to our Cloud Connect provider. The hot backups only kept 3-5 days' worth of restore points due to space constraints; the cold backups were planned for 3-year retention.

Now that we have the Data Domain Appliance, I would like to keep the long-term copies on the Data Domain Appliance and reduce the storage and retention of the data that is hosted in the cloud repository.

I have added the Data Domain as a dedupe appliance and am currently following the Best Practices Guide, which recommends 1) disabling inline dedupe and 2) using "Local target (large blocks)" in the backup job options.
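For reference, those two job settings can be sketched as the options they change. This is only an illustration in Python: the key names are descriptive placeholders, not actual Veeam job properties, and the 4 MB block size is what "Local target (large blocks)" maps to in recent versions, so verify against the guide for your version.

```python
# Hypothetical sketch of the two job option changes recommended for
# dedupe appliances; key names are illustrative, not Veeam API fields.
job_options = {
    # Let the Data Domain handle deduplication: Veeam's inline dedupe
    # adds CPU cost and reduces the appliance's own dedupe efficiency.
    "inline_data_deduplication": False,
    # "Local target (large blocks)" uses large (4 MB) blocks, which
    # lowers metadata overhead on deduplicating storage.
    "storage_optimization_block_kb": 4096,
}

print(job_options["storage_optimization_block_kb"] // 1024)  # block size in MB
```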

My Questions
--------------------
Thinking about retention and knowing the Data Domain will house the "hot copy":
On my initial backup job, should I use "Keep certain full backups longer for archival purposes" (4 weekly, 12 monthly, 3 yearly), or would it be better to create a second repository on the Data Domain and use that second repo as a backup copy target with longer retention? I'm thinking of one primary backup job with GFS archival and one backup copy.
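To put the GFS question in storage terms, here is a rough sizing sketch. The 4/12/3 counts come from the question above; the 2 TB full backup size is a hypothetical placeholder, not a figure from this thread, and the result is a pre-dedupe ceiling (the Data Domain will store far less).

```python
# Rough worst-case sizing for the GFS archival fulls on the Data Domain.
# The per-full size below is an assumed example value, not from the post.

def gfs_full_count(weekly: int, monthly: int, yearly: int) -> int:
    """Number of distinct full backups GFS retains at steady state."""
    return weekly + monthly + yearly

def worst_case_tb(full_size_tb: float, weekly: int = 4,
                  monthly: int = 12, yearly: int = 3) -> float:
    """Pre-dedupe storage ceiling for the retained GFS fulls."""
    return full_size_tb * gfs_full_count(weekly, monthly, yearly)

print(gfs_full_count(4, 12, 3))  # retained fulls at steady state
print(worst_case_tb(2.0))        # TB before Data Domain dedupe
```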

If I use the local backup server, I can ditch the Private Cloud Backup Server along with the cost. Using the Data Domain as the primary, the 10TB disk on the new local backup server is overkill and I can probably shrink that. Should I consider keeping it for Local Replicas?



Infrastructure
Onsite Dell VxRail System
Onsite Dell Data Domain

Backup Repositories
Onsite - Veeam Server (10TB)
Onsite - Dell Data Domain (30TB)
Private Cloud - Windows Azure Server (6TB)
Private Cloud - Cloud Connect Repo (28TB)

Backup Jobs
-------------------
RPO-12 - RTO-01 - Critical Infrastructure [Replicated]
RPO-12 - RTO-06 - High Priority Systems [Replicated]
RPO-12 - RTO-12 - High Priority Systems
RPO-12 - RTO-12 - AppGroup-010 Systems
RPO-24 - RTO-12 - AppGroup-020 Systems
RPO-24 - RTO-24 - AppGroup-030 Systems
RPO-24 - RTO-24 - AppGroup-031 Systems
RPO-24 - RTO-24 - Medium Priority Systems
RPO-24 - RTO-99 - Low Priority Systems
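A quick sanity check on a tier list like the one above is to compare each job's schedule interval against its stated RPO. This is a minimal sketch: only the RPO hours come from the list; the job schedule intervals are hypothetical placeholders.

```python
# Hypothetical schedule check: a backup interval longer than the RPO
# guarantees an RPO violation. Intervals below are assumed examples.
jobs = {
    "Critical Infrastructure": {"rpo_hours": 12, "interval_hours": 12},
    "High Priority Systems":   {"rpo_hours": 12, "interval_hours": 12},
    "AppGroup-020 Systems":    {"rpo_hours": 24, "interval_hours": 12},
    "Low Priority Systems":    {"rpo_hours": 24, "interval_hours": 24},
}

def meets_rpo(job: dict) -> bool:
    # The worst-case data loss equals the interval between backups.
    return job["interval_hours"] <= job["rpo_hours"]

violations = [name for name, job in jobs.items() if not meets_rpo(job)]
print(violations)  # empty when every interval fits inside its RPO
```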

Backup Copy Jobs
-------------------
All Critical, High, App Group and Medium systems have Backup Copy Jobs

Replication Jobs
-------------------
Critical and High Priority [Replicated]

Post by PetrM »

Hello,
toddr279 wrote: or would it be better to create a second repository on the Data Domain and use that second repo as a Backup Copy with longer retention?
I don't see any reason to keep redundant copies of the same backup within the same storage. According to the 3-2-1 rule, the copies of data must reside on two different media. Therefore, a single backup job with GFS enabled should fully meet your needs.
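That 3-2-1 reasoning can be sketched as a quick check. Illustrative Python only: it counts production data as the first copy (the common reading of the rule) and uses the layout discussed in this thread, with the Data Domain as primary and the Cloud Connect repository as the offsite copy.

```python
# Minimal 3-2-1 check: at least 3 copies of the data (production
# included), on at least 2 different media, with at least 1 offsite.
copies = [
    {"location": "onsite", "media": "data-domain"},    # primary backup
    {"location": "cloud",  "media": "cloud-connect"},  # backup copy
]

def satisfies_321(copies: list, production_counts: bool = True) -> bool:
    total = len(copies) + (1 if production_counts else 0)
    media = {c["media"] for c in copies}
    offsite = any(c["location"] != "onsite" for c in copies)
    return total >= 3 and len(media) >= 2 and offsite

print(satisfies_321(copies))  # True: 3 copies, 2 media, 1 offsite
```

A second repository on the same Data Domain would add a copy but not a second medium, which is why it doesn't advance the rule.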

toddr279 wrote: Should I consider keeping it for Local Replicas?
I'm not sure I understand which replicas you are talking about. From my point of view, it's enough to have a backup job with GFS pointed to the local Data Domain and a backup copy job which uploads data to the cloud provider.

Thanks!

Post by Mildur »

VM replicas could be needed for the low-RTO policy.
With the Data Domain, I don't see a low RTO like 1 hour being achievable for bigger VMs. It depends on the Data Domain model, of course. I have never tested Instant VM Recovery with Veeam from a Data Domain storage, but I can see some performance issues with that.

If toddr is using a Data Domain as the primary backup repo, the replicas could be really helpful in a DR scenario.
Product Management Analyst @ Veeam Software

Post by PetrM »

I wouldn't opt for a dedupe appliance to store VM replicas, as lower read performance is expected with deduplicating storage systems. That said, I don't think the analogy with Instant Recovery is 100% correct, because in that case data is read from a compressed backup file, which can cause additional processing rate losses.

Thanks!

Post by Mildur »

I am really sorry, PetrM.
I thought he was referring to VM replicas on another host. After reading the text again, it's clear to me that by "replicas" he meant backup copy jobs to his 10TB Windows Server, not a Veeam replica job.
toddr279 wrote: If I use the local backup server, I can ditch the Private Cloud Backup Server along with the cost. Using the Data Domain as the primary, the 10TB disk on the new local backup server is overkill and I can probably shrink that. Should I consider keeping it for Local Replicas?
My recommendation:

Backup job to the local backup server's 10 TB disk (short-term retention).
Backup copy job to the Data Domain (long-term retention).

You will get the best restore speed that way.