Comprehensive data protection for all workloads
TomPioreck
Influencer
Posts: 16
Liked: never
Joined: Jul 11, 2014 6:18 pm
Full Name: Tom Pioreck
Contact:

revamping backups - Best Practices

Post by TomPioreck »

I'd like to start the new year with a newly designed backup topology. We currently use backup copy jobs for only a few of our servers, basically our most critical file and application servers. We have three office locations and use replication and backup copy jobs to keep versions available in our other offices, instead of having to partner with a third-party host. I would like to configure our environment as efficiently as possible. I've noticed over the last year of using backup copy jobs, including using them to maintain our backup retention for archival purposes, that the disk space used grows quite large, quite fast. We recently upgraded our environment to version 8 and run NetApp SANs for our storage environment. All servers are VMware, save one domain controller. I'm hoping to get some guidance on the following:

- We currently run a separate job for each server, including each file server. Backup copy jobs mirror these servers. Is there a benefit to running backup and/or backup copy jobs with multiple servers contained within? The servers all back up on the same schedule. Would we gain anything in terms of compression rate?

- I've set our backup copy retention to allow for seven daily, four weekly, 12 monthly, and seven yearly restore points.
- If the daily backups are deleted at the end of their respective weeks, do they need to exist for the weekly restore point to be created properly? I have the same question about maintaining all of the weekly points at the end of the month when the monthly point is created. Does the next-level job rely on its "child"? More directly, will the weekly point be created as expected if the daily backups no longer exist in the environment? What about the prior week's? Is the same true for the monthly and yearly points?

- What are the best practice recommendations for non-impactful servers that have jobs run regularly, but don't really require daily backups to be available? We'd only look to maintain small weekly and monthly backups, rolled up for our yearly retention.

- Best recommendations when it comes to Exchange 2010 and its regularly scheduled backup? We don't use Exchange as our official email retention system.

I'd like to express my thanks in advance; I know this is a rather lengthy post. I've used the manual and the documentation available for setting up the drives, but I feel there is something I'm missing from a logic standpoint and would like to work through those items. The major spike in disk usage when we started implementing the backup copy jobs has me wary. I can't just increase our storage capacity at this point, so I need to have all of the figures in front of me when I redesign our backup solution.

Thanks again.
meilicke
Influencer
Posts: 22
Liked: 4 times
Joined: Sep 02, 2014 2:51 pm
Full Name: Scott Meilicke
Contact:

Re: revamping backups - Best Practices

Post by meilicke »

I believe you will get better space utilization with a single job, because dedupe happens only within a single job. Multiple jobs will not dedupe across each other.
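To make that concrete, here's a toy model of per-job dedupe (not Veeam's actual engine; the block IDs and counts are made up for illustration). Blocks shared between VMs, such as a common OS image, are stored once if the VMs share a job, but once per job if they are split up:

```python
# Toy model: a "block" is a hashed storage chunk; dedupe keeps each unique
# block once per job, so shared OS blocks are stored once per job they appear in.

def job_size(vms):
    """Number of unique blocks across all VMs in one job (dedupe within the job)."""
    unique = set()
    for blocks in vms:
        unique.update(blocks)
    return len(unique)

# Three Windows VMs sharing most OS blocks, plus unique data blocks each.
os_blocks = {f"os{i}" for i in range(800)}
vm_a = os_blocks | {f"a{i}" for i in range(200)}
vm_b = os_blocks | {f"b{i}" for i in range(200)}
vm_c = os_blocks | {f"c{i}" for i in range(200)}

single_job = job_size([vm_a, vm_b, vm_c])                         # 800 + 3*200 = 1400
separate_jobs = sum(job_size([vm]) for vm in (vm_a, vm_b, vm_c))  # 3*1000 = 3000

print(single_job, separate_jobs)  # 1400 3000
```

With these made-up numbers, the single job stores less than half the blocks of three separate jobs, because the 800 shared OS blocks are kept once instead of three times.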

Do you have NetApp at both your local and remote sites? If so, have you tried using SnapMirror/snapshot copies? You may get the best dedupe from that setup. I haven't used NetApp, so maybe I'm completely wrong! :)

Scott
TomPioreck
Influencer
Posts: 16
Liked: never
Joined: Jul 11, 2014 6:18 pm
Full Name: Tom Pioreck
Contact:

Re: revamping backups - Best Practices

Post by TomPioreck »

I haven't tried anything with SnapMirror or snapshot copies; I'm not the enterprise storage guy, so I'm not real strong on its abilities or how it's currently used. We've been running all of our processes through Veeam for backups. We do have NetApp boxes in all of our office sites.
veremin
Product Manager
Posts: 20270
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: revamping backups - Best Practices

Post by veremin »

Would we gain anything in terms of compression rate?
Having multiple VMs inside a single job will guarantee better deduplication ratios. This applies to both backup and backup copy jobs.
Does the next level job rely on its "child"?
No, a monthly GFS restore point doesn't rely on the weekly one, nor quarterly on monthly, nor yearly on quarterly. Kindly get familiar with the corresponding section of the User Guide; it should clarify the situation for you.
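As a sketch of why the tiers are independent, here is a simplified GFS-style selection (my own toy logic, not Veeam's actual scheduler; the 7/4/12/7 policy numbers come from the original post). Each tier picks restore points purely by date from the chain, so a weekly point is not "built from" retained dailies:

```python
from datetime import date, timedelta

def gfs_keep(points, daily=7, weekly=4, monthly=12, yearly=7):
    """Pick the restore points each GFS tier would retain, by date alone."""
    points = sorted(points, reverse=True)          # newest first
    keep = set(points[:daily])                     # daily tier: last N points
    weeks, months, years = set(), set(), set()
    for p in points:
        w = p.isocalendar()[:2]                    # (ISO year, ISO week)
        if w not in weeks and len(weeks) < weekly:
            weeks.add(w); keep.add(p)              # newest point of that week
        m = (p.year, p.month)
        if m not in months and len(months) < monthly:
            months.add(m); keep.add(p)             # newest point of that month
        if p.year not in years and len(years) < yearly:
            years.add(p.year); keep.add(p)         # newest point of that year
    return keep

today = date(2015, 1, 15)
chain = [today - timedelta(days=i) for i in range(400)]
kept = gfs_keep(chain)

# Re-running the selection on only the retained points yields the same set:
# expired dailies are not needed for the weekly/monthly/yearly picks.
print(gfs_keep(sorted(kept)) == kept)              # True
```

The final check is the answer to the question above in miniature: deleting daily points that have aged out does not change which weekly, monthly, or yearly points the policy selects.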
What are the best practice recommendations for non-impactful servers that have jobs run regularly, but don't really require daily backups to be available? We'd only look to maintain small weekly and monthly backups, rolled up for our yearly retention.
Are you talking about backup copy jobs? If so, just specify a 7-day copy interval, and enable monthly and yearly GFS for it.
Best recommendations when it comes to Exchange 2010 and its regularly scheduled backup.
It depends solely on the RPO/RTO requirements dictated by your company policy.

Thanks.
TomPioreck
Influencer
Posts: 16
Liked: never
Joined: Jul 11, 2014 6:18 pm
Full Name: Tom Pioreck
Contact:

Re: revamping backups - Best Practices

Post by TomPioreck »

So time, bandwidth, manpower, and growth have brought me full circle to this question again. We're considering a greenfield approach to totally revamp our backup infrastructure: multiple offices, interconnected through MPLS or SSL-VPN, each with its own local storage for on-site file restores, and nightly replication between offices for our critical systems. We're looking to add iLand as our cloud backup provider and use it for our official DR and retention, which has me trying to determine how much space we're going to need in our contract.

The goal is to approach our backups as if we were taking them over for the first time, considering what we would want in an ideal situation. The company is expected to double in size within five years, so scaling up along with it is a major consideration as we go forward. We're weighing the types of files, their cycle of change and use on a daily/weekly basis, how they measure up on an internal mission-critical scale, separation of data types among the disks on the servers, and how we want to shape our retention policy going forward. Our industry has certain requirements for file retention within the live system, but the length of backup retention is not clearly defined or dictated.

Here are the questions I'm considering:
- If we have servers configured with only one disk that store multiple types of data, wouldn't it be prudent to redesign those servers to properly segment the data, and avoid scope creep when creating our backup plan and structure?

- Do I get better use of my storage space by keeping multiple "end-of-day" restore points in my regular jobs, as opposed to using GFS retention points and weekly archive points within a backup copy job?

- Am I the only one who feels that, when reviewing the infrastructure and trying to replace what's already in place with a clean, new solution, I wind up going in circles? With so many different points of data to review, one solution or idea completely counteracts the one I had already developed and the progress I had already made.

- Is there any logical reason to use our cloud storage for the smaller data sets, purely as a cost consideration, and our local storage for the rest of a split environment, just because it could possibly offer a better return on investment in the immediate future, even if the benefit becomes negligible, if not reversed, as the enterprise continues to grow?

Thanks for any, and all, suggestions, comments, and any kind of insight that's offered.

Tom
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: revamping backups - Best Practices

Post by foggy »

TomPioreck wrote:- Do I get better use of my storage space by using multiple "end-of-day" retention points in my regular jobs, as opposed to using the GFS retention points and weekly archive points within a backup copy job structure?
This tool should help you to perform space estimations.
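For a rough pencil-and-paper version of the same estimate, something like the sketch below can help. All the ratios are placeholders to replace with your own measured compression and daily change rates, and it assumes each GFS restore point is stored as a full backup:

```python
# Back-of-envelope sizing for a backup copy repository with GFS retention.
# compression and daily_change are assumed ratios - plug in your own numbers.

def repo_size_gb(source_gb, daily=7, weekly=4, monthly=12, yearly=7,
                 compression=0.5, daily_change=0.05):
    full = source_gb * compression                    # one current full backup
    increment = source_gb * daily_change * compression
    gfs_fulls = (weekly + monthly + yearly) * full    # each GFS point kept as a full
    return full + daily * increment + gfs_fulls

# 2 TB of source data under the 7/4/12/7 policy from the thread:
print(round(repo_size_gb(2000)))   # 24350 GB, dominated by the GFS fulls
```

Even with crude numbers, this makes visible why GFS archive points dominate the space bill, which matches the rapid growth described earlier in the thread.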