
Data archive strategy? "Backup Copy" job isn't working for us

Post by Bart1982 »

Dear forum members,

I work for a large firm with locations across the world, and in my role I'm responsible for a number of sites with local VMware installations. We have a central Veeam server and proxies at the remote locations. Backups are written to NAS devices at each location, and we replicate the most important servers back to our central datacenter. We are running Veeam 9 with Standard licenses.

There is a requirement to store backups for a longer period. In the past we set up 4 jobs per location:
1. Daily backup job, 14 days retention
2. Daily replication job to the DC, 3 days retention
3. Weekly backup job, 14 weeks retention
4. Monthly backup job, 12 months retention

Let's assume that a full backup is 1 TB in size and each differential backup is 50 GB.

With our previous strategy we would require the following storage space on a NAS:
1. 1000 GB + 700 GB
2. – (the replica lives in the DC, not on the NAS)
3. 1000 GB + 700 GB
4. 1000 GB + 600 GB
Total: 5000 GB = 5 TB
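
To make the arithmetic easy to check, here is a quick Python sketch of that layout (the 1 TB full and 50 GB per-point sizes are just the assumptions above, not measured values):

    # Storage needed on the NAS for the old four-job layout.
    # Assumptions from the post: 1 TB per full, 50 GB per restore point.
    FULL_GB = 1000
    POINT_GB = 50

    daily = FULL_GB + 14 * POINT_GB     # 14 days retention   -> 1700 GB
    weekly = FULL_GB + 14 * POINT_GB    # 14 weeks retention  -> 1700 GB
    monthly = FULL_GB + 12 * POINT_GB   # 12 months retention -> 1600 GB
    # The replication job targets the DC, so it uses no NAS space.

    print(daily + weekly + monthly)     # 5000 GB = 5 TB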

We decided to switch strategies and use a "backup copy" job to a secondary NAS with the following settings:
1. Backup copy job with 7 restore points
2. 4 weekly backups
3. 3 monthly backups
4. 2 quarterly backups
5. 1 yearly backup

It turns out that each GFS interval keeps its own full backup file. This generates the following amount of data:
1. 1000 GB + 350 GB
2. 4 × 1000 GB
3. 3 × 1000 GB
4. 2 × 1000 GB
5. 1 × 1000 GB
Total: 11,350 GB ≈ 11 TB
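
The same sanity check in Python, again with the assumed 1 TB / 50 GB sizes:

    # Storage for the backup copy job: the regular chain of 7 restore
    # points plus one standalone full per GFS point.
    FULL_GB = 1000
    POINT_GB = 50

    chain = FULL_GB + 7 * POINT_GB   # 1000 GB + 350 GB
    gfs_fulls = 4 + 3 + 2 + 1        # weekly + monthly + quarterly + yearly

    print(chain + gfs_fulls * FULL_GB)   # 11350 GB, roughly 11 TB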

Some locations are quite small and generate a full backup file of only 300 GB, but we also have locations where a full backup is 3.5 TB.

Is there a better way to store our backups without consuming so much backup space?

Thanks!
Re: Data archive strategy? "Backup Copy" job isn't working for us

Post by Shestakov »

Hello Bart and welcome to the community!
You can save additional space if you schedule the GFS backup copy job so that, say, the weekly backup runs on Sunday and the monthly one on the last Sunday of the month. You will have fewer job runs and fewer backup files, because the same backup can be marked as both "weekly" and "monthly", etc. This way you can save 3-4 full backups, i.e. 3-4 TB.
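
To make the overlap concrete, here is a toy Python snapshot (the dates are synthetic and only illustrate how aligned GFS points can share full files; the real behavior depends on your job schedule):

    # Count distinct GFS restore points when weeklies run on Sundays and
    # monthly/quarterly/yearly points fall on the last Sunday of the period.
    from datetime import date, timedelta

    def last_sunday(year, month):
        # last day of the month, rolled back to the nearest Sunday
        d = date(year + (month == 12), month % 12 + 1, 1) - timedelta(days=1)
        return d - timedelta(days=(d.weekday() + 1) % 7)

    today = date(2016, 7, 31)  # a Sunday, picked for the example
    weekly = {today - timedelta(weeks=i) for i in range(4)}    # 4 weekly
    monthly = {last_sunday(2016, m) for m in (5, 6, 7)}        # 3 monthly
    quarterly = {last_sunday(2016, 3), last_sunday(2016, 6)}   # 2 quarterly
    yearly = {last_sunday(2015, 12)}                           # 1 yearly

    points = weekly | monthly | quarterly | yearly
    print(len(points), "full files instead of", 4 + 3 + 2 + 1)  # 8 vs 10

In this particular snapshot two fulls are shared; depending on where the month and quarter boundaries fall relative to the weeklies, the overlap can be a point or two larger.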
By the way, for the calculations you can use a handy tool, the Restore Point Simulator.
Thanks!
Re: Data archive strategy? "Backup Copy" job isn't working for us

Post by Bart1982 »

Thanks, I'll have a look at the tool and at the suggested schedule changes.

I understand that all of the GFS copies are full files so that they can be restored easily and to provide redundancy against bitrot, file loss, corruption, etc. But keeping 11 full copies (or 8 with the changed schedules) is a bit too much for our NAS devices to cope with. Wouldn't it be nicer if a backup copy job created a 'central datastore' file holding a full copy, with the differentials stored as separate files linked to that 'central datastore'? For redundancy the full file should exist twice (or even better, three times) on the backup copy target.

This way you could create many more archive points and keep them for a longer term without having so many TBs in use.

In my situation it would mean:
3x central datastore = 3 TB
The 4/3/2/1 setup described before = 10 differential files. Of course these files will be a bit bigger than before; let's assume double the size, 100 GB each: 10 × 100 GB = 1 TB

So only 4 TB would be required for the archive. Or am I dreaming now? :)
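
Spelled out in the same Python style as above (all sizes are assumptions, and this layout is just my wish, not something the product does today):

    # Hypothetical 'central datastore' layout.
    FULL_GB, DIFF_GB = 1000, 100   # assumed sizes from above
    full_copies = 3                # redundant copies of the single full
    diff_files = 4 + 3 + 2 + 1     # weekly/monthly/quarterly/yearly points

    print(full_copies * FULL_GB + diff_files * DIFF_GB)   # 4000 GB = 4 TB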

Re: Data archive strategy? "Backup Copy" job isn't working for us

Post by Shestakov »

Bart1982 wrote: Wouldn't it be nicer if a backup copy job created a 'central datastore' file holding a full copy, with the differentials stored as separate files linked to that 'central datastore'?
This way those would not be full backups but incrementals, which is the same thing you can already do with a basic backup job. Plus, if we are talking about quarterly or monthly increments, there will be lots of changes over those months, so the size of each increment will be much bigger than that of a daily increment.
So 4 TB is a dream indeed :)
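
A toy model of that growth (the 50% block-reuse factor is a made-up assumption purely to show the trend; real change rates vary per workload):

    # Rough size of one increment spanning `days` of changes on a 1 TB VM
    # with ~50 GB of daily change. `reuse` models blocks rewritten on
    # several days and therefore stored only once (an assumption).
    FULL_GB, DAILY_GB = 1000, 50

    def increment_gb(days, reuse=0.5):
        unique = DAILY_GB * (1 + (days - 1) * (1 - reuse))
        return min(FULL_GB, unique)   # cannot exceed the full itself

    for days in (1, 7, 30, 90):
        print(days, "days ->", increment_gb(days), "GB")
    # 1 -> 50, 7 -> 200, 30 -> 775, 90 -> capped at 1000 GB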