We want to replace our tape backup with backup on object storage.
Currently we write a full backup every month and keep those of the last 2 years (24 fulls in total). For each of the other 8 years we keep one full backup, the one written in January. This way we keep 33 full backups. To make it clearer, here is how it would look in 12/2020:
I would like to change this backup plan and replace tape backup with backup on object storage. Considering our capacities (~150 TB full backup each month), I would like to write an incremental backup instead of a full backup every month from February on. This way we would keep only 15 full and 18 incremental backups. Here is a picture:
Can this be done with the current Veeam features? How would we set it up?
Our object storage integration is forever-incremental, so we will never send periodic full backups to object storage - only deltas. There are still full backups in object storage, but new full backups leverage blocks copied over by previous full backups (instead of copying their entire content all over again).
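A toy sketch of this block reuse, purely as an illustration (this is not Veeam's actual storage format; the block contents and names are made up):

```python
import hashlib

class ObjectStorageModel:
    """Toy model of forever-incremental offload: each block is uploaded once,
    keyed by its content hash; later restore points reuse existing blocks."""

    def __init__(self):
        self.blocks = {}          # content hash -> block bytes kept in the bucket
        self.restore_points = {}  # restore point name -> list of block hashes

    def offload(self, name, data_blocks):
        refs, uploaded = [], 0
        for block in data_blocks:
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:  # only new/changed blocks go over the wire
                self.blocks[digest] = block
                uploaded += 1
            refs.append(digest)
        self.restore_points[name] = refs
        return uploaded

store = ObjectStorageModel()
print(store.offload("2020-01 full", [b"A", b"B", b"C"]))  # 3 blocks uploaded
print(store.offload("2020-02 full", [b"A", b"B", b"D"]))  # 1 block uploaded (the delta)
print(len(store.restore_points))                          # 2 complete restore points
```

The second offload sends only the changed block, yet both restore points are complete fulls from a restore perspective.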
Thank you for this information. The most important thing in my case is not the amount of data that has to be transferred, but the total storage space consumption on the object storage.
Can I have 33 restore points of a Veeam backup job on object storage without storing 33 full backups there?
There won't be 33 full backups stored on the Capacity Tier. After the first full restore point is offloaded to object storage, the following restore points will reuse blocks already copied for it - the Capacity Tier works in a forever-incremental way (ReFS-like logic).
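To put rough numbers on the storage-consumption question: the ~150 TB full is taken from the figures above, while the 10% monthly change rate is purely an assumption (the real rate depends on your data):

```python
full_tb = 150          # full backup size, from the figures above
points = 33            # desired number of restore points
change_rate = 0.10     # assumed monthly change rate (hypothetical)

standalone_fulls = points * full_tb
forever_incremental = full_tb + (points - 1) * full_tb * change_rate

print(f"33 standalone fulls:  {standalone_fulls:,.0f} TB")    # 4,950 TB
print(f"forever-incremental: ~{forever_incremental:,.0f} TB") # ~630 TB
```

This ignores block expiration and compression, but it shows the order-of-magnitude difference in space consumption.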
Wouldn't it be a reliability problem if, in this case, we had a chain of 1 full backup and 32 "incrementals" spanning 10 years?
Because if one block becomes unreadable (disk bit error rate or something else), the whole 10-year chain with several hundred TB would be corrupt. Does that make sense? Is this reliable enough?
Or maybe there is a possibility to save multiple identical blocks (as in my picture "Expectations")?
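To illustrate my concern with a toy example (all numbers made up):

```python
# When restore points share blocks, a single unreadable block can
# invalidate every restore point that references it.

def affected(restore_points, bad_block):
    """Restore points that can no longer be fully restored."""
    return [name for name, blocks in restore_points.items() if bad_block in blocks]

# Hypothetical chain: block 0 was uploaded once by the first full,
# and every later restore point still references it.
restore_points = {f"RP-{i:02d}": {0, 1000 + i} for i in range(33)}

print(len(affected(restore_points, bad_block=0)))     # 33 -> the whole chain
print(len(affected(restore_points, bad_block=1005)))  # 1  -> only that point
```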
Some additional information:
- We want to use S3 Object Lock for these GFS backups (a conceptual sketch of what we have in mind follows below)
- If it's possible to use just the backup job without Backup Copy, we would prefer that (our Veeam B&R is up to date)
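For context, a conceptual sketch of what a 10-year Object Lock retention looks like at the S3 API level (boto3 is used here purely for illustration; Veeam manages Object Lock on its objects itself, and the bucket/key names and values below are made up):

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")

# Hypothetical bucket that was created with Object Lock (and versioning) enabled.
bucket, key = "veeam-gfs-archive", "backups/2020-01-full.blk"

# A ~10-year retain-until date in COMPLIANCE mode: no one, not even the
# root account, can delete or overwrite this object version before it passes.
retain_until = datetime.now(timezone.utc) + timedelta(days=3653)

s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"...block data...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=retain_until,
)
```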
basic39 wrote (May 08, 2020 8:30 pm): Wouldn't it be a reliability problem if, in this case, we had a chain of 1 full backup and 32 "incrementals" spanning 10 years?
Because if one block becomes unreadable (disk bit error rate or something else), the whole 10-year chain with several hundred TB would be corrupt. Does that make sense? Is this reliable enough?
You're probably still thinking in terms of general-purpose storage here. For that, we do indeed recommend storing GFS backups as standalone fulls, for the reason you have just explained.
However, object storage natively keeps multiple copies of each object, so a great deal of redundancy is built into the storage itself. This allows you not to worry about redundancy at the application level. In object storage, each object copy is checksummed, and when an object is served, its content is validated against this checksum for correctness. In case of a data corruption issue, another copy of the object is served transparently instead.
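Conceptually, the read path works something like this simplified sketch (real object storage does this internally, across disks and nodes):

```python
import hashlib

def get_object(replicas, expected_sha256):
    """Serve the first replica whose content matches the stored checksum;
    corrupted copies are skipped transparently."""
    for copy in replicas:
        if hashlib.sha256(copy).hexdigest() == expected_sha256:
            return copy
    raise IOError("all replicas failed checksum validation")

data = b"backup block"
checksum = hashlib.sha256(data).hexdigest()
replicas = [b"backup blo\x00k", data, data]  # first copy suffered bit rot

print(get_object(replicas, checksum) == data)  # True: served from a healthy copy
```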
You're probably still thinking in terms of general-purpose storage here. For that, we do indeed recommend storing GFS backups as standalone fulls, for the reason you have just explained.
How can that be implemented? Because, as you say, the Capacity Tier works in a forever-incremental way.
So how can multiple standalone fulls be stored on the same object storage? With multiple scale-out repositories and buckets, and backup copy jobs?
I think you either completely misunderstood my post, or read only the part you quoted and not the second part. The second part specifically explains why, with object storage, you don't need to worry about this issue at all.
Yes, now I see that I misunderstood your post. Thank you for the information.
Do I understand correctly that I would need two scale-out repositories and two buckets for our immutability requirements?
The retention time of the monthly backups should be 2 months, and of the yearly backups 10 years. The only way I know of is to create two different scale-out repositories with different immutability settings.
The second question: the maximum value of the "make recent backups immutable for" option is 999 days. Can I somehow extend it to 10 years?
Our immutable backups feature is designed specifically for protecting your recent (most important) backups against ransomware or insider threats. It cannot be used to make GFS backups immutable for their entire lifetime. I recommend that you instead do VTL to Glacier Deep Archive (or a similar object storage offering) for those yearly backups that you need to store for 10 years.