sorell.it
Novice
Posts: 4
Liked: 3 times
Joined: Oct 24, 2016 10:22 pm
Full Name: Sorell IT

Best Practice

Post by sorell.it » 1 person likes this post

Hi,

We are backing up to an offsite repo over the internet. We had it configured to do GFS, but that takes a lot of disk space and is apparently old-school thinking. I have been reading up on using a seed and creating a job that is mapped to the seed and either runs every 30 days and keeps the last 12 restore points, or runs every day and keeps the last 30, to roughly fall in line with the GFS method. Mapped backups seem to be a 1-to-1 relationship, as it says "this job is currently mapped to job x, are you sure you want to reassign it?" when I try to configure the second job.

Is it possible to have multiple jobs running from a single seed, without keeping two copies of the seed?

Should I just keep the last 365 restore points, or does that cause issues as well? It seems excessive, and there would be little difference to the business between a restored backup that is 235 days old and one that is 230 days old.

Are there better ways to manage 12 months' worth of backups on limited disk space?
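
For rough scale, here is the back-of-envelope sizing I am working from (the full size and daily change rate are assumptions about our environment, not measured values):

    # Rough repository sizing for the options above (all inputs are assumptions).
    FULL_GB = 500          # size of one full backup
    DAILY_CHANGE = 0.05    # fraction of data that changes per day
    INC_GB = FULL_GB * DAILY_CHANGE

    # Option A: one incremental chain holding 365 daily restore points
    daily_chain = FULL_GB + 364 * INC_GB

    # Option B: 12 monthly restore points, each kept as an independent full
    monthly_fulls = 12 * FULL_GB

    print(f"365 daily points: ~{daily_chain / 1024:.1f} TB")    # ~9.4 TB
    print(f"12 monthly fulls: ~{monthly_fulls / 1024:.1f} TB")  # ~5.9 TB

With those assumed numbers, neither option is obviously cheap, hence the question.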

Thanks.
PTide
Product Manager
Posts: 6405
Liked: 720 times
Joined: May 19, 2015 1:46 pm

Re: Best Practice

Post by PTide »

Hi,
We are backing up to an offsite repo over the internet. We had it configured to do GFS, but that takes a lot of disk space and is apparently old-school thinking.
It's not quite clear whether you use a Backup Copy job with GFS retention to ship the backups to the offsite storage, or just a plain backup job while maintaining GFS by other means.

Thanks
sg_sc
Enthusiast
Posts: 61
Liked: 8 times
Joined: Mar 29, 2016 4:22 pm
Full Name: sg_sc

Re: Best Practice

Post by sg_sc »

It seems to me you need ReFS formatted with a 64 KB cluster size, so that synthetic fulls use block cloning, plus Backup Copy Jobs with GFS retention.
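
As a sketch of why block cloning changes the math, assuming a 500 GB full and that roughly 20% of blocks are new or changed each month (both numbers are placeholders, not measurements):

    # Why block cloning changes the GFS math (a rough model; inputs are assumptions).
    FULL_GB = 500           # size of one full backup
    POINTS = 12             # monthly GFS points for one year
    MONTHLY_UNIQUE = 0.20   # fraction of blocks new or changed per month (assumption)

    # Without block cloning, every GFS point is a physically separate full.
    plain = POINTS * FULL_GB

    # With block cloning, a synthetic full references unchanged blocks, so each
    # extra point costs roughly only the blocks that changed since the last one.
    cloned = FULL_GB + (POINTS - 1) * FULL_GB * MONTHLY_UNIQUE

    print(f"GFS without block cloning: ~{plain / 1024:.1f} TB")   # ~5.9 TB
    print(f"GFS with block cloning:    ~{cloned / 1024:.1f} TB")  # ~1.6 TB
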
taurus1978
Technology Partner
Posts: 20
Liked: 2 times
Joined: May 11, 2015 11:51 am
Full Name: Patrick Huber

Re: Best Practice

Post by taurus1978 »

Hello,

Some tips for offsite backup:

- Configure primary jobs to back up to local storage.
- Create a seed of the primary job and transfer it to the remote location.
- Then configure the Backup Copy job to use GFS retention and send it to the remote repository. Map it to the seed ("Map backup...").
- Use dedicated proxies on each site, with local storage for them.
- For the primary job and the Backup Copy job, set Storage Optimization to LAN or WAN target, depending on your line capacity; this sets the block size for the backup files (see the quick feasibility check after this list).
- For very poor WAN lines, use the WAN accelerator feature.
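
Before committing to the seed-and-copy approach, it is worth a quick feasibility check against the line; the sizes, bandwidth, and efficiency factor below are placeholder assumptions:

    # Quick feasibility check for seeding and daily copies (inputs are assumptions).
    SEED_GB = 500        # size of the seed (first full backup)
    DAILY_INC_GB = 25    # average daily incremental shipped offsite
    LINK_MBPS = 50       # usable upload bandwidth to the remote repo
    EFFICIENCY = 0.8     # fudge factor for protocol overhead and contention

    def transfer_hours(gb, mbps, eff=EFFICIENCY):
        """Hours needed to push `gb` gigabytes over an `mbps` megabit/s link."""
        return gb * 8 * 1024 / (mbps * eff) / 3600

    print(f"Seed over the wire: ~{transfer_hours(SEED_GB, LINK_MBPS):.0f} h")      # ~28 h
    print(f"Daily increment:    ~{transfer_hours(DAILY_INC_GB, LINK_MBPS):.1f} h")  # ~1.4 h

If the daily increment does not fit your transfer window, that is where the WAN target block size and the WAN accelerator come in.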

How many GFS points you define depends on your recovery policies. For example, if you want to be able to recover data from the last 12 months on a weekly basis, you need to configure 52 weekly GFS points; for one year on a monthly basis, you need to keep 12 monthly GFS points.
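
Turning a recovery window into a point count is simple ceiling division; a minimal sketch (the helper name is mine, not a Veeam setting):

    import math

    # Number of GFS points needed to cover a recovery window at a given granularity.
    def gfs_points(window_days, granularity_days):
        return math.ceil(window_days / granularity_days)

    print(gfs_points(365, 7))   # weekly points for one year  -> 53 (~52)
    print(gfs_points(365, 30))  # monthly points for one year -> 13 (12 calendar months)
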

Hope this helps you a little bit.

Regards,
Veeam Enthusiast
Veeam Certified Architect