Comprehensive data protection for all workloads
Jesh
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2015 4:24 pm
Full Name: Greg T
Contact:

How would you approach this 3-2-1-1

Post by Jesh »

Hello,

I currently have all of these resources available to me, and am curious how you would approach this backup method.

Data Domain DD3300 - 10 Gb (same room as the servers)
Tape drive with LTO-6 - 10 Gb (same room as the servers)
HPE MSA 2050 direct-connect storage array (same room as the servers)
Wasabi cloud - (30 Mbps upload only...)
Secondary VMware site off site, 30 Mbps upload from the main site only
An older DD160 (same room as the servers)

We have the ability to move anything here out of the server room to a more secure location on site.

We have around 10 TB of servers, with only around 3 TB tagged as 'Critical' (we use 'Critical', 'High', 'Medium' and 'Low', with Low being servers we can rebuild quickly and that are not required to be online).

Obviously we have the DD3300 as the daily target, with some critical servers every 6 hours, and tape as weekly, but I am at a loss as to whether I should use the MSA 2050, replicas, the cloud, or the DD160. We're using the DD160 for backup copies right now, but I think that may be a bad idea; using the MSA 2050 might be better for Instant Recovery.

Please let me know your thoughts.
foggy
Veeam Software
Posts: 21138
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: How would you approach this 3-2-1-1

Post by foggy » 1 person likes this post

Hi Greg, it's not that you should use all of your hardware just because you have it; rather, you need to define the threats you want to protect against and then make use of the existing resources to set up your backup infrastructure. The classic approach is to have faster storage for short-term backups and operational restores (i.e. Instant Recovery and file-level restore), some slower (typically dedupe) secondary storage for longer retention, and cloud/tape for offsite/offline backups. Replication is orthogonal to this: use it if you want the ability to fail over immediately. So as a first step, I'd rethink your use of the DD3300 as primary storage.
jorgedlcruz
Veeam Software
Posts: 1493
Liked: 655 times
Joined: Jul 17, 2015 6:54 pm
Full Name: Jorge de la Cruz
Contact:

Re: How would you approach this 3-2-1-1

Post by jorgedlcruz » 2 people like this post

Hello,
Welcome to the forums; what a great way to start here! This is a very interesting topic. There have already been some comments here about dedupe appliances as the main primary repository.

If I were able to change some of that design within budget, I would have something like:
For your critical workloads
  • First backup, daily, plus every 6 hours. This would go to your favorite storage vendor that provides storage and compute, like an HPE Apollo, for example, or Dell EMC R740xd2 servers if you are a Dell house. This will be the Performance Tier part of a SOBR, with a Capacity Tier in Wasabi. Both of these copies would be immutable: the on-prem one with a Veeam Hardened Linux Repository and the second on an object repository with immutability. The Performance Tier here gives you the speed you need to recover or trigger Instant VM Recovery for those critical workloads.
  • An immediate backup copy would go to the DD3300 onsite and would leverage Dell technology to replicate to the DD160 in an alternate location. This way you have an offsite copy that Veeam is not aware of, done with Dell technology, adding another layer of security: in case of an attack, hackers would not see the points replicated by Dell.
  • Finally, the tape job could be retired if you are happy with GFS going to Wasabi. If you keep it, running it from the primary site is okay as long as you take the tapes offsite.
For less critical workloads, meaning daily backups where recovery speed is not a concern
  • First backup to the Performance Tier, or failing that to the DD3300.
  • Backup copy to the MSA 2050 on the second site, hopefully presented to something you can format as XFS for immutability.
  • Additional Dell copy to the DD160, outside of the Veeam realm.
  • Additional tape if required.
These are just some ideas and experiences from the field; if you have a trusted local Veeam Partner, I would suggest you engage with them so they can help you design a more robust approach.

Hope it helps.
Jorge de la Cruz
Senior Product Manager | Veeam ONE @ Veeam Software

@jorgedlcruz
https://www.jorgedelacruz.es / https://jorgedelacruz.uk
vExpert 2014-2024 / InfluxAce / Grafana Champion
Jesh
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2015 4:24 pm
Full Name: Greg T
Contact:

Re: How would you approach this 3-2-1-1

Post by Jesh »

Thank you! I forgot to mention we have all production servers running on a Nimble.

So this brings the MSA 2050 back into play; could it be used instead of an HPE Apollo?
jorgedlcruz
Veeam Software
Posts: 1493
Liked: 655 times
Joined: Jul 17, 2015 6:54 pm
Full Name: Jorge de la Cruz
Contact:

Re: How would you approach this 3-2-1-1

Post by jorgedlcruz »

Hello,
You will need to connect the MSA 2050 to some hardware over iSCSI so you can format it as XFS under Linux. I am not sure about the read speed of the MSA 2050; it will surely be somewhat inferior to an Apollo, but yes: for the Performance Tier, if the read speed is better than the Data Domain's and meets what your business requires, it could be a good solution.

I would work with a Partner, or even with HPE if you are more of an HPE house.

Let us know what you end up deciding, and have a great weekend.
Jorge de la Cruz
Senior Product Manager | Veeam ONE @ Veeam Software

@jorgedlcruz
https://www.jorgedelacruz.es / https://jorgedelacruz.uk
vExpert 2014-2024 / InfluxAce / Grafana Champion
Jesh
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2015 4:24 pm
Full Name: Greg T
Contact:

Re: How would you approach this 3-2-1-1

Post by Jesh »

Thanks. We have the MSA connected to three VMware hosts right now, plus the Nimble.

So our backup as is right now goes:

Daily backup to the DD3300, an immediate backup copy to the DD160 (old system), a replica of critical servers to the offsite every 8 hours, a backup of critical servers to Wasabi every 3 days, and weekly to tape. This obviously isn't a good idea.

I'll need to give some thought to what to do with the other hardware, since our bandwidth is HORRIBLE, making offsite a pain in the ass. That is why we are only using it for critical servers right now... this is a lot tougher than it seems.
soncscy
Veteran
Posts: 643
Liked: 312 times
Joined: Aug 04, 2019 2:57 pm
Full Name: Harvey
Contact:

Re: How would you approach this 3-2-1-1

Post by soncscy »

Does the secondary site have a different uplink speed, or is it also 30 Mbit?

One item that I see missing is offsite copies. While I know the uplink is slow, what about seeding the secondary site (backup copies + replicas) and then, if bandwidth allows, offloading further?

What is the average total transferred for an incremental run of your backup copies? If it's < 500 GiB, you likely could establish offsite copies with backup copy seeding, and the same with replicas for the most critical servers.
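Back-of-envelope, that 30 Mbps uplink is the hard ceiling here. A quick sketch of the transfer math (the 0.8 usable-line-rate factor is an assumption, not a measurement from this thread):

```python
# Rough transfer-time math for a slow uplink.
# The 0.8 efficiency factor is an assumed fraction of usable line rate.

def transfer_hours(gib, mbps, efficiency=0.8):
    """Hours to push `gib` GiB over an uplink of `mbps` megabits/s."""
    bits = gib * 1024**3 * 8                 # payload size in bits
    usable = mbps * 1_000_000 * efficiency   # usable bits per second
    return bits / usable / 3600

print(f"{transfer_hours(500, 30):.1f} h")   # a 500 GiB increment: ~50 h
print(f"{transfer_hours(100, 30):.1f} h")   # a 100 GiB increment: ~10 h
```

So anything close to 500 GiB per pass takes roughly two days at this uplink; daily offsite copies only stay feasible well below that figure.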

The pain point in this model, of course, is that if anything happens to the backup copy/replica, it means a drive to the second site to deliver a new seed, but it should be feasible. Frankly speaking, I wouldn't even bother trying to use the DD160: based on the specs from Dell, restores from it look like they would be a pain even as emergency restores. Maybe keep it as a supplement on the main site for non-critical servers.

The MSA on the second site I think would be ideal, either attached to a Linux/Windows machine to serve as a repository, or with a gateway server to use it via NFS/SMB.
LazyLemur
Lurker
Posts: 2
Liked: never
Joined: Nov 29, 2021 7:46 pm
Full Name: Mike Hatcher
Contact:

Re: How would you approach this 3-2-1-1

Post by LazyLemur »

How would you configure the SOBR with a primary backup onsite (Performance Tier), a secondary target (backup copy) onsite, and capacity in the cloud?
I didn't think it worked like that. If it can, I will be next in line.
somoa
Influencer
Posts: 22
Liked: 3 times
Joined: Aug 10, 2020 5:57 am

Re: How would you approach this 3-2-1-1

Post by somoa »

You should be able to easily offload 10 GB a day with a 30 Mbps upload. Mine offloads around 5 GB an hour on a similar connection.
The first offload of the full backup can take some time, but once it is only doing incrementals it is much better.

I also have mine speed-limited to only 5 Mbps during working hours. This helps a lot during the initial seeding.
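Those figures are easy to sanity-check; a minimal sketch (decimal GB, sustained line rate assumed):

```python
# Decimal gigabytes moved per hour at a sustained rate of `mbps` megabits/s.
def gb_per_hour(mbps):
    return mbps * 1_000_000 / 8 * 3600 / 1e9

print(f"{gb_per_hour(30):.1f} GB/h")  # full 30 Mbps link: 13.5 GB/h ceiling
print(f"{gb_per_hour(5):.2f} GB/h")   # 5 Mbps work-hours cap: 2.25 GB/h
```

So ~5 GB/h observed sits comfortably under the 13.5 GB/h line-rate ceiling, and a 10 GB daily increment needs well under an hour at full speed.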

I have all my servers backing up to a SOBR (DD3300). This offloads ASAP to cloud storage (Wasabi), with a backup copy job to other onsite local storage (HP MSA).
From the SOBR I run replicas to another compute and storage cluster (the off-site VMware site) in case I need to fail over, or I can run Instant VM Recovery from the SOBR.

Tape also comes along at the end of the week, month and year to grab a copy.

I would put the second storage device in another room or building if you have the option; even better if you can move it to the same location as the other VMware site.
Jesh
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2015 4:24 pm
Full Name: Greg T
Contact:

Re: How would you approach this 3-2-1-1

Post by Jesh »

We have no problem with the offloading; it just seems like maybe I have the timing all messed up. How often do people run these types of jobs? Maybe I am running them too often as daily?

I know this is a tough one, but what is a baseline best practice of how often to do backups for production servers to different media? (e.g. not the 'main' backup, but backup copies, replicas, and tape and/or cloud?)

Update: I set up an immutable repository with Linux and XFS. We are testing speeds on it now.
Mildur
Product Manager
Posts: 9838
Liked: 2602 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: How would you approach this 3-2-1-1

Post by Mildur »

Hi Jesh
Jesh wrote:
I know this is a tough one, but what is a baseline best practice of how often to do backups for production servers to different media? (e.g. not the 'main' backup, but backup copies, replicas, and tape and/or cloud?)
In my opinion, it depends on your business policy.
Using different media normally means you want to transport your backups to another location, in case something happens in your production datacenter.
Each company has to think about its RPO. The Recovery Point Objective defines how much data loss your company can tolerate. If you have an RPO of 4 hours, you must make sure you have a backup and/or replica of your data every 4 hours. In case something happens to your production site, you should also have a copy at another location so you can restore without losing more than 4 hours of data.
I would always do immediate copies for cloud and backup copy jobs, if my network and hardware allow it.
Your replica interval is defined by your RPO policy.
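As a tiny illustration of the RPO arithmetic above (the job-duration figure is hypothetical):

```python
# Worst-case data loss is roughly one schedule interval plus the time
# the job needs to complete a usable restore point.
def meets_rpo(interval_h, job_h, rpo_h):
    """True if a job every `interval_h` hours, taking up to `job_h`
    hours to finish, stays within an RPO of `rpo_h` hours."""
    return interval_h + job_h <= rpo_h

print(meets_rpo(4, 1, 4))  # every 4 h with a 1 h job misses a 4 h RPO
print(meets_rpo(3, 1, 4))  # every 3 h with a 1 h job fits
```

In other words, the schedule interval has to be a bit tighter than the RPO itself, to leave room for the job to complete.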

Best
Fabian
Product Management Analyst @ Veeam Software
Jesh
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2015 4:24 pm
Full Name: Greg T
Contact:

Re: How would you approach this 3-2-1-1

Post by Jesh »

OK, slowly approaching this.

We have decided to get a DL380 with 2.4 TB drives, but they are only 10K RPM, eight of them in RAID 6. This would be our performance tier. Is this going to be fast enough?
jorgedlcruz
Veeam Software
Posts: 1493
Liked: 655 times
Joined: Jul 17, 2015 6:54 pm
Full Name: Jorge de la Cruz
Contact:

Re: How would you approach this 3-2-1-1

Post by jorgedlcruz »

Hello Greg,
There are a few tools that will give you the theoretical speed to expect. Please remember that if you use it for the Performance Tier, you will have a Capacity Tier somewhere; I think it was Wasabi. So backups go to this Performance Tier, which would most likely handle them pretty well, and then those backups are sliced into blocks and uploaded to Wasabi, which is read-intensive. I would guess RAID 6 with 10K RPM drives will be somewhat slow, but not to worry, since as you said your upload link to the Internet is quite slow anyway, right?

Overall, I would say this is an OK approach, but once again, please validate these numbers and the architecture with your local Partner, your local Veeam SE, etc.
Jorge de la Cruz
Senior Product Manager | Veeam ONE @ Veeam Software

@jorgedlcruz
https://www.jorgedelacruz.es / https://jorgedelacruz.uk
vExpert 2014-2024 / InfluxAce / Grafana Champion
Jesh
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2015 4:24 pm
Full Name: Greg T
Contact:

Re: How would you approach this 3-2-1-1

Post by Jesh »

Yes, we have a slow upload, but we plan on upgrading to 300 Mbps up in the next two months. What kind of drives and RAID would you use in an HPE DL380 Gen10?
Jesh
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2015 4:24 pm
Full Name: Greg T
Contact:

Re: How would you approach this 3-2-1-1

Post by Jesh »

jorgedlcruz wrote: Aug 19, 2022 2:45 pm
  • Immediate Backup Copy will go to the DD3300 onsite and would leverage Dell technology to replicate to DD160 on an alternate location. This way you have a copy offsite that Veeam is not aware of, done with Dell technology, adding another layer of security in case of attack as hackers would not see those points replicated by Dell.
  • Finally, the Tape Job could be deprecated if you are happy with the GFS going to Wasabi. But if kept, the primary site is okay as long as you take the tapes out.

For less critical workloads, meaning daily backups and not concerned about recovery speed
  • First backup to the Performance Tier, if not to the DD3300.
  • Backup Copy to the MSA2050 on the second site, hopefully, presented to something you can format as XFS for immutability
So we have a DL380 with Linux set up as a hardened repository. We are trying to use the DD3300 for backup copy jobs, but instead of 800-1000 MB/s we are now only getting 300-500 MB/s.

Does anyone know of any configuration changes that need to be made to the DD3300 and/or to the backup copy jobs?
