dhayes16
Service Provider
Posts: 171
Liked: 19 times
Joined: Feb 12, 2019 2:31 pm
Full Name: Dave Hayes

Best Practice in this scenario help

Post by dhayes16 »

Hello All:
We have recently switched over from another backup solution with cloud storage, and we are very happy with Veeam. We almost always deploy a Windows Server based BDR on-site, and deploying B&R on those boxes has worked nicely. However, the other solution kept long-term (GFS) storage locally on-prem, and the off-site storage was for DR purposes only, in the event of a nuked data center. So in the cloud we would have about 7 restore points, while on-prem held the long-term storage and the ability to recover quickly if needed. I believe this goes against Veeam best practices, but I am trying to replicate it.

Basically, what we did for now is create a normal primary backup job with 10 restore points. We then have a backup copy job to a Cloud Connect provider with 7 restore points, so we can get that data off-site. We then have another backup copy job that does a GFS backup to another repository on the same local disk, but in a different folder. The obvious issue here is the waste of disk space, since we are duplicating 7 days of backups on the local disk.
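For a rough sense of that duplication, here is a back-of-the-envelope sketch in Python. The 500 GB full, 5% daily change rate, and 4 weekly GFS points are illustrative assumptions, not numbers from this thread:

# Rough local-disk footprint for the layout described above.
# All sizes are illustrative assumptions.
full_gb = 500
inc_gb = full_gb * 0.05                  # ~25 GB per daily increment

primary_chain = full_gb + 9 * inc_gb     # primary job: 10 restore points
copy_chain = full_gb + 6 * inc_gb        # local GFS copy job keeps its own 7-point chain
gfs_fulls = 4 * full_gb                  # e.g. 4 weekly GFS fulls

local_total = primary_chain + copy_chain + gfs_fulls
print(f"local disk total: {local_total:.0f} GB")
print(f"duplicated recent chain: {copy_chain:.0f} GB")  # the overlap described above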

There is another thread going about getting GFS into the primary backup job. But until that arrives, is this the only way to accomplish our goal? Also, are there specific settings I should consider regarding synthetic vs. active fulls for each of these jobs?

Thanks for any feedback.
Dave
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Best Practice in this scenario help

Post by HannesK »

Hello,
You already found one workaround. I have also seen customers run two backup jobs: one daily and one weekly / monthly.

Synthetic vs. active full is usually a decision about where to put the load: an active full reads the whole VM from production storage again, while a synthetic full is built from data already sitting on the backup storage. There is no "right" or "wrong" here.
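To make that trade-off concrete, here is a minimal sketch with made-up sizes (500 GB full, 25 GB increment; neither number is from this thread):

# Rough I/O comparison of the two full backup methods.
full_gb, inc_gb = 500, 25

# Active full: the entire VM is read from production storage again.
active = {"read_from_production": full_gb,
          "read_from_repo": 0,
          "written_to_repo": full_gb}

# Synthetic full: only the regular increment touches production; the
# full is synthesized from blocks already on the backup repository.
synthetic = {"read_from_production": inc_gb,
             "read_from_repo": full_gb,
             "written_to_repo": full_gb + inc_gb}

for name, io in [("active", active), ("synthetic", synthetic)]:
    print(name, io)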

Best regards,
Hannes
BartP
Veeam Software
Posts: 230
Liked: 62 times
Joined: Aug 31, 2015 8:24 am
Full Name: Bart Pellegrino
Location: Netherlands

Re: Best Practice in this scenario help

Post by BartP »

Because you ask for a Best Practice:
The solution you have at the moment is aimed at restoring all files/VMs as fast as possible; only in a DR scenario would you restore from the secondary backup repository. Should the primary site burn down, all you have are the 7 most recent restore points; all older data (yearly, monthly, etc.) is gone.

The thing is that you (most probably) won't ever boot a full VM from a monthly GFS backup unless there is no other option. Most often, a GFS backup is used to restore files, folders, etc. With that in mind, do you have a use case where it's absolutely necessary to keep your GFS backups on-site?

For many companies, it's not only about being able to restore, but also about staying compliant with external regulations. I do not know whether you would still be compliant, but I'd rather mention it (in abundance) than not.

Keeping 10 days of backups on-site, with 7 restore points plus GFS off-site in the cloud, is a proper solution. This aligns with the best practice: you can always restore quickly from a recent restore point, and all data is off-site for DR, so no matter what happens, you have your data. Imagine you recovered from a DR and then someone misses a file that is suddenly "critical" to close a deal. It's not there anymore, and you're not able to recover it.

Now, that person who deleted a file and took a month or more to discover it? I'm 100% sure that's not a priority 1 or 2 problem but a low-priority one, and there is no need to keep that file on fast/expensive backup storage. It's not the end of the world if the restore takes 30 minutes. For clarity: restoring over a 100 Mbit WAN, using only 50% of the bandwidth, still means you can restore a 9 GiB file in about 30 minutes.
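A quick sanity check of that figure, assuming exactly the numbers above (100 Mbit/s line, 50% usable bandwidth, 9 GiB file):

# Transfer time for 9 GiB at 50% of a 100 Mbit/s WAN.
size_bits = 9 * 2**30 * 8        # 9 GiB expressed in bits
rate_bps = 100e6 * 0.5           # 50 Mbit/s effective throughput
minutes = size_bits / rate_bps / 60
print(f"{minutes:.0f} minutes")  # ~26 minutes, inside the 30-minute estimate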
Bart Pellegrino,
Technical Account Manager - EMEA
dhayes16
Service Provider
Posts: 171
Liked: 19 times
Joined: Feb 12, 2019 2:31 pm
Full Name: Dave Hayes

Re: Best Practice in this scenario help

Post by dhayes16 »

HannesK wrote: Feb 20, 2019 9:23 am You already found one workaround. I have also seen customers run two backup jobs: one daily and one weekly / monthly.
Thanks very much for that. We will scope that out and see what works best in this scenario. I appreciate it.
dhayes16
Service Provider
Posts: 171
Liked: 19 times
Joined: Feb 12, 2019 2:31 pm
Full Name: Dave Hayes

Re: Best Practice in this scenario help

Post by dhayes16 »

BartP wrote: Feb 20, 2019 10:50 am
The thing is that you (most probably) won't ever boot a full VM from a monthly GFS backup unless there is no other option. Most often, a GFS backup is used to restore files, folders, etc. With that in mind, do you have a use case where it's absolutely necessary to keep your GFS backups on-site?
Thanks for this detailed response. Yes, it does seem like we are going against the prevailing current of Veeam's design philosophy. The main reason, I suppose, is that as an MSP we are selling a DR solution where we can recover in the event of a nuked location. So the customer is more than happy to have an operational infrastructure from DR even if they lose all their GFS history. They know this is the way it works from the get-go. It primarily has to do with the off-site storage costs that a cloud-based GFS solution would require. We do bring it up to all customers pre-deployment, but they are more concerned (as short-sighted as it might sound) with just being operational in the event of a disaster. We do offer an additional tier option to allow them to have more storage retention if needed, but most elect not to, since very few of them have compliance requirements.

I guess I was hoping that Veeam had a little more flexibility in this regard, allowing local GFS on-site without redundant data (primary and copy jobs) on the local storage. I believe GFS on the primary job has been discussed in another thread, and it would be super useful.

We are going to look at possibly using the U4 capacity tier option with Azure or S3 as well, but that is just another layer to manage.

I do understand your points about GFS being gone if the site is nuked, but it is more of an economic trade-off from the customer's perspective.

I really appreciate the reply
Dave
BartP
Veeam Software
Posts: 230
Liked: 62 times
Joined: Aug 31, 2015 8:24 am
Full Name: Bart Pellegrino
Location: Netherlands

Re: Best Practice in this scenario help

Post by BartP »

No problem, that's why we're here.
Should you really want to see this in a future release, it's always possible to put in a feature request.
I can imagine you are not alone in this :)
Bart Pellegrino,
Technical Account Manager - EMEA
dhayes16
Service Provider
Posts: 171
Liked: 19 times
Joined: Feb 12, 2019 2:31 pm
Full Name: Dave Hayes

Re: Best Practice in this scenario help

Post by dhayes16 »

Thanks... I believe what we need is being discussed in the thread below.

veeam-backup-replication-f2/feature-req ... 25743.html
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Best Practice in this scenario help

Post by HannesK »

Yep, that discussion is the right one, and it mentions that we are already working on this :-)