Host-based backup of VMware vSphere VMs.
egrigson
Novice
Posts: 6
Liked: never
Joined: Nov 03, 2014 9:32 am
Full Name: Ed Grigson
Contact:

Minimising storage for long term retention - an idea

Post by egrigson »

I've been asked by a client to minimise the storage requirements for long-term retention of a couple of large VMs, and I'd like some feedback on a potential solution.

I would normally recommend using a local backup job and copying it offsite via a backup copy job, using GFS to create synthetic full backups monthly and at year end. The problem here is that the backup copy job will consist of 16 full backups - 11 monthlies and 5 year-end fulls. In this case a full backup of the two VMs uses 400GB (after Veeam dedupe/compression), resulting in 400GB * 16 = ~6.5TB.

The client has suggested that, as the monthly backups are largely unchanged (an estimated 25% rate of change over a month), this seems like a waste of capacity. The idea is to replace the backup copy job with a backup job which only runs once a month and uses incrementals - so January would be a monthly full, while February, March, April, etc. would be monthly incrementals. This should result in approx. 400GB + (11 * 100GB) = 1.5TB, plus 400GB * 5 (yearlies) = 2TB, for 3.5TB in total. Taking this concept one step further, we could also run the yearly jobs as incrementals to minimise the required capacity further. These backups can be copied offsite using storage replication.
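As a quick sanity check of those numbers, here's the arithmetic as a small Python sketch (the 25% monthly change rate is the client's estimate, not a measurement):

```python
# Back-of-envelope comparison of the two retention schemes, using the
# figures above: 400GB per full, ~25% change over a month.
full_gb = 400

# Current plan: backup copy job with GFS - 11 monthly + 5 yearly fulls
gfs_total = full_gb * (11 + 5)                        # 6400GB, ~6.5TB

# Proposed: one full + 11 monthly incrementals, plus 5 yearly fulls
monthly_inc = full_gb * 0.25                          # ~100GB per month
proposed = full_gb + 11 * monthly_inc + 5 * full_gb   # 3500GB = 3.5TB

print(f"GFS fulls: {gfs_total / 1000:.1f}TB, incrementals: {proposed / 1000:.1f}TB")
```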

I have concerns around operational complexity, increased restore time, and increased risk (dependency on a chain of backups over a lengthy period), but assuming the client is happy to accept those, is there any reason why this wouldn't work? Any other thoughts on the pros and cons of this approach?

I'd be grateful for all feedback.

Regards,

Ed.
chrisdearden
Veteran
Posts: 1531
Liked: 226 times
Joined: Jul 21, 2010 9:47 am
Full Name: Chris Dearden
Contact:

Re: Minimising storage for long term retention - an idea

Post by chrisdearden »

It shouldn't make that much difference in terms of restore time as far as Veeam is concerned, other than you would need to make the whole year's chain available to do a restore (if you are bringing it back from tape, this is going to take a bit longer).

The main concern is the operational risk: a month-end backup will depend on the whole chain rather than being a standalone GFS point.

What sort of storage were you looking to write the long-term data to?
egrigson
Novice
Posts: 6
Liked: never
Joined: Nov 03, 2014 9:32 am
Full Name: Ed Grigson
Contact:

Re: Minimising storage for long term retention - an idea

Post by egrigson »

Thanks for the quick response as always, Chris. A Synology NAS - I know, I know, NAS isn't recommended for backups, but it's already in place so can't be changed.

Have you seen other customers doing anything similar?

Ed.
egrigson
Novice
Posts: 6
Liked: never
Joined: Nov 03, 2014 9:32 am
Full Name: Ed Grigson
Contact:

Re: Minimising storage for long term retention - an idea

Post by egrigson »

Hmm, a first potential problem: the UI for a backup job doesn't allow you to choose 'last day of the month'. You can choose 'last Saturday', 'fourth Saturday', etc., but the day won't be consistent month to month. The periodic active-full settings seem to have a similar UI with the same issue. I'm not sure how critical it would be if we ran on a given day (say the fourth Saturday) rather than the last day of the month.
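
A quick illustration of the drift, as a minimal sketch using Python's standard library (2014 is just an example year):

```python
import calendar

# The "fourth Saturday" falls anywhere from the 22nd to the 28th, is not
# always the last Saturday, and never lands reliably on month end.
year = 2014
for month in range(1, 13):
    saturdays = [d for d in calendar.Calendar().itermonthdates(year, month)
                 if d.month == month and d.weekday() == calendar.SATURDAY]
    month_end = calendar.monthrange(year, month)[1]
    print(f"{calendar.month_abbr[month]}: 4th Sat = {saturdays[3].day:2d}, "
          f"last Sat = {saturdays[-1].day:2d}, month ends {month_end}")
```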

There's some operational complexity right there!

Ed.
chrisdearden
Veteran
Posts: 1531
Liked: 226 times
Joined: Jul 21, 2010 9:47 am
Full Name: Chris Dearden
Contact:

Re: Minimising storage for long term retention - an idea

Post by chrisdearden »

In v8? For some reason I thought that was one of the options added. It must have been another period I was thinking of.

The only thing that springs to mind would be presenting an iSCSI LUN from that box to a Windows Server 2012 machine and running some dedupe on it - full backups tend to dedupe pretty well together.
egrigson
Novice
Posts: 6
Liked: never
Joined: Nov 03, 2014 9:32 am
Full Name: Ed Grigson
Contact:

Re: Minimising storage for long term retention - an idea

Post by egrigson »

I'm still using v7 - I'll have a look at v8 and see if some options have been added. Interesting idea on using 2012 dedupe, although that's also going down the route of added complexity.
chrisdearden
Veteran
Posts: 1531
Liked: 226 times
Joined: Jul 21, 2010 9:47 am
Full Name: Chris Dearden
Contact:

Re: Minimising storage for long term retention - an idea

Post by chrisdearden » 1 person likes this post

I'll check my instance of it as well.

It's there :)

[Screenshot of the v8 scheduling options]
dellock6
VeeaMVP
Posts: 6139
Liked: 1932 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Minimising storage for long term retention - an idea

Post by dellock6 »

Are we sure a monthly incremental will not be almost as big as a full, with the additional risk of keeping the chain open?
Remember we are talking about image-based backups, so blocks change over a month even if only a few files change in the guest OS. We usually assume a 2-5% daily change rate; even if the storage rewrites the same blocks, say, 90% of the time, by the end of the month we are probably talking about 20-30% of the original size. And we are talking about 6.5TB in total, not hundreds, so even if we "waste" 20-30% more space, I would consider the independence of GFS restore points an advantage worth the additional capacity.
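
As a crude back-of-envelope model of that point (the 5% daily write rate and the 90% rewrite bias are assumptions for illustration, not measurements):

```python
# Crude model: each day a fraction of blocks is written, and most of those
# writes land on blocks already changed earlier in the month.
daily = 0.05          # assumed 5% of blocks written per day (top of the 2-5% range)
rewrite_bias = 0.90   # assumed 90% of daily writes hit already-changed blocks

dirty = 0.0           # fraction of unique blocks changed since the monthly full
for day in range(30):
    rewrites = min(daily * rewrite_bias, dirty)   # can't rewrite more than is dirty
    dirty += daily - rewrites

print(f"monthly incremental ≈ {dirty:.1%} of a full")   # ≈ 19.5%
```

Even with heavy rewriting, a small daily change rate compounds into that 20-30% ballpark over a month.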

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
larry
Veteran
Posts: 387
Liked: 97 times
Joined: Mar 24, 2010 5:47 pm
Full Name: Larry Walker
Contact:

Re: Minimising storage for long term retention - an idea

Post by larry »

I switched to using Seagate's LaCie 2big Thunderbolt 2 hard drive array - 6TB when mirrored, 12TB if striped. I use mirroring because things happen. They direct-connect with USB 3 to the Veeam servers, some of which have a few of these now. In the rack they just sit on top of the server. The backup speed is about the same as local disk; when one is full we place it in the vault and start a new one. I feel that in 7 years USB technology will still be around - I'm not so sure about tape or any one vendor. We encrypt using BitLocker before we back up to them. One nice side effect is that if you need to replace or upgrade a server, you move the USB cable, import the backups, and are done in 5 minutes, with the jobs running normally that night. You can also move a job from one server to the next.
readie
Expert
Posts: 158
Liked: 30 times
Joined: Dec 05, 2010 9:29 am
Full Name: Bob Eadie
Contact:

Re: Minimising storage for long term retention - an idea

Post by readie »

egrigson wrote: Have you seen other customers doing anything similar?
Ed.
We went down a very similar route to yours for the same reasons. We wanted to keep monthly/quarterly/yearly backup copies, exactly as offered in the backup copy job setup. But our backup size was 6TB, and we didn't have the space to keep 6TB monthly, quarterly, etc. = 100TB+.
So we kept to the backup copy job (you decided to use a backup job?) with one job doing monthlies, keeping 3 restore points, and another job going quarterly (100 days was the maximum interval I could select), keeping 3 restore points.
Then annually we backup-copy to an external USB drive.
The advantage is that these are incremental, so the size on disk is about 7TB for each (monthly and quarterly) = 14TB, rather than the 100TB+ we were heading for (until we stopped!).
We haven't got FULL monthlies going back a year, but monthlies since the last quarterly seem about right, then quarterlies going back a year.
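
Roughly, the sums look like this (the GFS point counts are an estimate of where we were heading, inferred from the 100TB+ figure):

```python
# Rough sizing: full GFS points vs the two incremental backup copy chains.
full_tb = 6

# e.g. 12 monthlies + 4 quarterlies + 1 yearly retained as full points
gfs_points = 12 + 4 + 1
print(f"Full GFS points: ~{full_tb * gfs_points}TB")   # ~102TB

# Instead: two chains (monthly and quarterly), each ~7TB on disk
print(f"Incremental chains: ~{2 * 7}TB")               # 14TB
```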

I also wanted to use Windows 2012 Dedupe to reduce this 14TB, but I keep finding references to dedupe not working on files bigger than 1TB... or at least advisories not to use it on files bigger than 1TB. Has anyone got comments on this? I cannot find any MS documentation saying anything about a maximum file size for dedupe.

Bob
Bob Eadie
Computer Manager at Bedford School, UK (since 1999).
Veeam user since 2009.
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Minimising storage for long term retention - an idea

Post by veremin »

readie wrote: I also wanted to use Windows 2012 Dedupe to reduce this 14TB, but I keep finding references to dedupe not working on files bigger than 1TB... or at least advisories not to use it on files bigger than 1TB. Has anyone got comments on this? I cannot find any MS documentation saying anything about a maximum file size for dedupe.
This topic might be useful for you. Thanks.