Discussions related to using object storage as a backup target.
kestis
Novice
Posts: 6
Liked: never
Joined: Dec 02, 2020 4:16 pm
Contact:

Optimizing Object Offloading

Post by kestis »

I've got a fairly large backup footprint. My understanding is that offloading to Azure requires a closed chain, which means offloading our entire backup footprint instead of just the increments. Veeam's SOBR capacity tier offload runs on a regular interval (default = 4 hours, with a registry value to adjust it). I've tried getting offloading running with some smaller test jobs, but I'm getting overlap between offload tasks even after moving that interval out, so Veeam has been throwing errors about failed offloads. I'm assuming the answer is just to push the interval out farther, or to ignore those errors and check object storage to confirm offloads are happening, but babysitting backups that intently defeats the purpose of a lot of these automations.
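
To make the overlap problem concrete, here is a trivial sketch of the scheduling arithmetic only. It does not touch Veeam or the registry value mentioned above; the interval and session durations are made-up numbers standing in for what you would read out of your own offload session history.

```python
# Illustration only: whether a fixed offload interval collides with
# long-running offload sessions. Durations are hypothetical examples,
# not values pulled from Veeam.

def overlapping_sessions(interval_hours: float, session_hours: list[float]) -> list[int]:
    """Indices of sessions still running when the next scheduled scan fires."""
    return [i for i, duration in enumerate(session_hours) if duration > interval_hours]

# Default scan interval is 4 hours; suppose three recent offload runs took
# 3.5 h, 6 h and 9 h on a large footprint (made-up figures).
print(overlapping_sessions(4.0, [3.5, 6.0, 9.0]))   # -> [1, 2]
# Even at an 8-hour interval, the 9-hour run still overlaps the next scan:
print(overlapping_sessions(8.0, [3.5, 6.0, 9.0]))   # -> [2]
```

In other words, no interval is safe until the slowest offload session reliably finishes inside it, which is why pushing the interval out only helps up to a point.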

My understanding is that I can copy increments to the capacity tier, but if I want to actually offload them from on-prem storage, I need a closed chain. Are there any options to offload my increments within my retention policy, keeping only a certain number of restore points on-prem, without offloading my entire backup footprint whenever our full backups run (e.g. keep 90 days in total: 75 in object storage and 15 on-prem)?

What can I do to optimize my storage offload, and should I give up on the idea of offloading anything less than my entire footprint?
wishr
Veteran
Posts: 3077
Liked: 453 times
Joined: Aug 07, 2018 3:11 pm
Full Name: Fedor Maslov
Contact:

Re: Optimizing Object Offloading

Post by wishr »

Hi Kestis,

May I ask you a few questions for a better understanding?

1. What backup method are you using in your jobs, and what is their schedule?
2. What's the end goal of using object storage in your situation?

Thanks in advance!
veremin
Product Manager
Posts: 20270
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Optimizing Object Offloading

Post by veremin »

kestis wrote: Are there any options to offload my increments within my retention policy, keeping only a certain number of restore points on-prem, without offloading my entire backup footprint whenever our full backups run (e.g. keep 90 days in total: 75 in object storage and 15 on-prem)?
You can achieve this by:

* creating a daily backup job with a weekly full backup
* setting 90 days of retention for it
* adding an object storage repository
* attaching it as the Capacity Tier of a Scale-Out Backup Repository
* enabling the move policy
* setting the operational restore window to 14 days
* pointing the job to the Scale-Out Backup Repository

This should meet your requirements; a short sketch of how the move policy picks what to offload follows after this post.

Thanks!
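
To spell out what the move policy in the list above actually ships off, here is a minimal sketch in Python. It is not Veeam code, and the names (`RestorePoint`, `points_to_move`, `chain_id`) are made up for illustration; the rule it models is the one described in this thread: a restore point moves only once its chain has been sealed by a newer full and it has aged out of the operational restore window.

```python
# Minimal sketch (not Veeam code) of the move-policy rule discussed here:
# only restore points that belong to a sealed (inactive) chain AND fall
# outside the operational restore window leave on-prem storage.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RestorePoint:
    created: date
    is_full: bool
    chain_id: int      # a full and its increments share a chain_id

def points_to_move(points, today, window_days=14):
    active_chain = max(p.chain_id for p in points)        # chain of the latest full is still open
    cutoff = today - timedelta(days=window_days)
    return [p for p in points
            if p.chain_id != active_chain                  # chain sealed by a newer full
            and p.created < cutoff]                        # and outside the restore window

# Example: weekly fulls with daily increments, 14-day window.
today = date(2021, 9, 30)
points = []
for chain in range(4):                                     # four weekly chains
    start = today - timedelta(days=7 * (4 - chain))
    points.append(RestorePoint(start, True, chain))
    points += [RestorePoint(start + timedelta(days=d), False, chain) for d in range(1, 7)]

print(len(points_to_move(points, today)))   # -> 14: the two oldest sealed chains move
```

Note that what moves is sealed chains' worth of data, fulls and increments alike, which matches the clarification later in the thread.
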
kestis
Novice
Posts: 6
Liked: never
Joined: Dec 02, 2020 4:16 pm
Contact:

Re: Optimizing Object Offloading

Post by kestis »

wishr wrote: Aug 31, 2021 3:53 pm 1. What backup method are you using in your jobs, and what is their schedule?
2. What's the end goal of using object storage in your situation?
1. Forward incremental runs daily with a quarterly active full. Our test jobs are daily forward incremental with a monthly synthetic full.
2. The end goal of object storage is to offload the brunt of our restore points so we can keep a smaller on-prem footprint and stop filling our racks with storage appliances. We have a pretty large backup footprint that's growing at a steady clip, so getting restore points into the cloud would extend the life of the storage we currently have on-prem without impacting our retention policy.
kestis
Novice
Posts: 6
Liked: never
Joined: Dec 02, 2020 4:16 pm
Contact:

Re: Optimizing Object Offloading

Post by kestis »

veremin wrote: Aug 31, 2021 4:20 pm You can achieve this by:

* creating a daily backup job with a weekly full backup
* setting 90 days of retention for it
* adding an object storage repository
* attaching it as the Capacity Tier of a Scale-Out Backup Repository
* enabling the move policy
* setting the operational restore window to 14 days
* pointing the job to the Scale-Out Backup Repository

This should meet your requirements.
Thanks for the response, but this still seems to require offloading my entire backup footprint rather than just the increments within my retention period. Additionally, it would require keeping no fewer than two full backup chains for each of our jobs on-prem, which is a significant amount of storage to eat. Ideally, we'd keep our quarterly full backups per job, staggered so we never have more than two full chains on-prem at any time, and offload increments instead of the full backup footprint.
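
For what it's worth, here is a very rough back-of-the-envelope way to size that on-prem cost under the weekly-full scheme. The helper and the numbers are hypothetical (plug in your own per-job full size and daily change), and it assumes the per-restore-point move rule sketched earlier in the thread.

```python
# Rough, hypothetical sizing of the on-prem footprint with weekly fulls and a
# 14-day operational restore window. full_tb and daily_change_tb are made-up
# placeholders; substitute your own per-job figures.
def on_prem_estimate_tb(full_tb: float, daily_change_tb: float,
                        window_days: int = 14, full_interval_days: int = 7) -> float:
    # Worst case: the active full plus every older full still inside the
    # restore window stays local, i.e. roughly window/interval + 1 fulls,
    # together with about window_days worth of increments.
    fulls_on_prem = window_days // full_interval_days + 1
    return fulls_on_prem * full_tb + window_days * daily_change_tb

print(on_prem_estimate_tb(full_tb=10.0, daily_change_tb=0.5))   # -> 37.0 TB for a 10 TB full job
```

On that rough model, the two to three weekly fulls kept locally are the dominant cost of the scheme, which is exactly the trade-off being objected to here.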

I'm not sure if what I'm looking to do is in the realm of possibility with Veeam's offerings, but hoping this clears up my predicament just a little.

Thanks, again!
wishr
Veteran
Posts: 3077
Liked: 453 times
Joined: Aug 07, 2018 3:11 pm
Full Name: Fedor Maslov
Contact:

Re: Optimizing Object Offloading

Post by wishr »

Thank you for commenting!

The only solution I see is to perform full (or synthetic full) backups more frequently to reduce the length of the active backup chain.
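
Rough arithmetic on why that helps, using the same per-restore-point rule sketched above (the helper name and figures are illustrative, not anything Veeam exposes): a point can only move after the next full seals its chain and the point has aged out of the operational restore window, so the first point of each chain waits the longest.

```python
# Illustrative only: how long the oldest point of a chain sits on-prem before
# it becomes eligible for the move policy, given the full-backup interval.
def days_until_movable(days_after_full: int, full_interval_days: int,
                       window_days: int = 14) -> int:
    """Days a point taken `days_after_full` days into a chain waits before it
    is both sealed (the next full has run) and outside the restore window."""
    days_until_sealed = full_interval_days - days_after_full
    return max(days_until_sealed, window_days)

print(days_until_movable(0, 90))   # quarterly active fulls  -> 90 days
print(days_until_movable(0, 30))   # monthly synthetic fulls -> 30 days
print(days_until_movable(0, 7))    # weekly fulls            -> 14 days (bounded by the window)
```

With quarterly fulls, nothing from a chain can leave on-prem storage for roughly a quarter, which is why shortening the chain is the only lever left short of switching to the copy policy.
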
veremin
Product Manager
Posts: 20270
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Optimizing Object Offloading

Post by veremin »

kestis wrote: Thanks for the response, but this still seems to require offloading my entire backup footprint rather than just the increments within my retention period.
Not the entire backup footprint, but the inactive part of the chain that falls outside the specified operational restore window (two weeks in your case). However, it does mean moving both full and incremental restore points. Thanks!